Máté Eckl [Thu, 12 Jul 2018 15:48:06 +0000 (17:48 +0200)]
netfilter: nft_socket: Expose socket mark
Signed-off-by: Máté Eckl <ecklm94@gmail.com>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Máté Eckl [Thu, 12 Jul 2018 15:18:46 +0000 (17:18 +0200)]
netfilter: nft_socket: Break evaluation if no socket found
The current implementation stores 0 in the destination register if no socket
is found by the lookup, but that is not intentional, as 0 is not really a
value of any socket metadata.
This patch fixes this by breaking rule evaluation in this case.
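A minimal sketch of the intended behaviour, assuming the usual nf_tables
expression eval convention (the real expression also performs a socket
lookup; here only skb->sk is checked, and the function name is illustrative):
#include <net/netfilter/nf_tables.h>
/* Break rule evaluation when no socket is attached, instead of storing
 * 0 in the destination register.
 */
static void nft_socket_eval_sketch(const struct nft_expr *expr,
				   struct nft_regs *regs,
				   const struct nft_pktinfo *pkt)
{
	struct sock *sk = pkt->skb->sk;	/* simplified: real code also does a lookup */

	if (!sk) {
		regs->verdict.code = NFT_BREAK;	/* stop evaluating this rule */
		return;
	}
	/* ... otherwise fill the destination register with socket metadata ... */
}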
Fixes: 554ced0a6e29 ("netfilter: nf_tables: add support for native socket matching")
Signed-off-by: Máté Eckl <ecklm94@gmail.com>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Pablo Neira Ayuso [Fri, 13 Jul 2018 12:54:44 +0000 (14:54 +0200)]
netfilter: nf_osf: add struct nf_osf_hdr_ctx
Wrap the context that allows us to guess the OS into a structure.
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Pablo Neira Ayuso [Fri, 13 Jul 2018 12:54:43 +0000 (14:54 +0200)]
netfilter: nf_osf: add nf_osf_match_one()
This new function allows us to check whether a TCP SYN packet matches a
given fingerprint, so that it can be reused from the upcoming
nf_osf_find() function.
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Florian Westphal [Wed, 11 Jul 2018 11:45:14 +0000 (13:45 +0200)]
netfilter: nf_tables: use dedicated mutex to guard transactions
Continue to use nftnl subsys mutex to protect (un)registration of hook types,
expressions and so on, but force batch operations to do their own
locking.
This allows distinct net namespaces to perform transactions in parallel.
Signed-off-by: Florian Westphal <fw@strlen.de>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Florian Westphal [Wed, 11 Jul 2018 11:45:13 +0000 (13:45 +0200)]
netfilter: nf_tables: avoid global info storage
The global scratch area works only because all accesses are currently
serialized by the nfnl nf_tables subsys mutex.
If we want to have per-netns locking, we need to make this scratch
area per-netns or allocate it on demand.
This does the latter; it's ~28 kbyte, but we can fall back to vmalloc
so it should be fine.
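An illustrative (not verbatim) pattern for the on-demand allocation with a
vmalloc fallback, using the kernel's kvmalloc helpers; the size macro and
function names are hypothetical:
#include <linux/mm.h>
/* Hypothetical scratch-area size; the commit quotes ~28 kbyte. */
#define NFT_SCRATCH_SIZE (28 * 1024)

static void *nft_alloc_scratch(void)
{
	/* kvmalloc() tries kmalloc() first and falls back to vmalloc()
	 * for larger allocations, so a ~28k buffer is unproblematic.
	 */
	return kvmalloc(NFT_SCRATCH_SIZE, GFP_KERNEL);
}

static void nft_free_scratch(void *scratch)
{
	kvfree(scratch);	/* handles both kmalloc'd and vmalloc'd memory */
}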
Signed-off-by: Florian Westphal <fw@strlen.de>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Florian Westphal [Wed, 11 Jul 2018 11:45:12 +0000 (13:45 +0200)]
netfilter: nf_tables: take module reference when starting a batch
Signed-off-by: Florian Westphal <fw@strlen.de>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Florian Westphal [Wed, 11 Jul 2018 11:45:11 +0000 (13:45 +0200)]
netfilter: nf_tables: make valid_genid callback mandatory
Always call this function; a followup patch can use this to
acquire a per-netns transaction log to guard the entire batch
instead of using the nfnl subsys mutex (which is shared among all
namespaces).
Signed-off-by: Florian Westphal <fw@strlen.de>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Florian Westphal [Wed, 11 Jul 2018 11:45:10 +0000 (13:45 +0200)]
netfilter: nf_tables: add and use helper for module autoload
Module autoload is problematic: it requires dropping the mutex that
protects the transaction. Once the mutex has been dropped, another
client can start a new transaction before we had a chance to abort
the current transaction log.
This helper makes sure we first zap the transaction log, then
drop the mutex for module autoload.
In case autoload is successful, the caller has to replay the entire
message anyway.
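A hedged sketch of the ordering described above; the mutex and function
names are stand-ins, not the actual nf_tables symbols:
#include <linux/mutex.h>
#include <linux/kmod.h>
#include <linux/errno.h>

static DEFINE_MUTEX(nft_sketch_mutex);	/* stand-in for the commit mutex */

/* Zap the transaction log first, only then drop the mutex for module
 * autoload, and ask the caller to replay the message.
 */
static int nft_autoload_sketch(const char *name)
{
	/* 1) abort/zap the current transaction log while still locked */

	/* 2) only then drop the mutex, so a concurrent client that grabs
	 *    it cannot race against our half-aborted state
	 */
	mutex_unlock(&nft_sketch_mutex);
	request_module("nft-expr-%s", name);	/* real nft expression alias pattern */
	mutex_lock(&nft_sketch_mutex);

	/* 3) the caller replays the entire message/batch */
	return -EAGAIN;
}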
Signed-off-by: Florian Westphal <fw@strlen.de>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Gao Feng [Mon, 9 Jul 2018 10:06:33 +0000 (18:06 +0800)]
netfilter: Remove useless param helper of nf_ct_helper_ext_add
The 'helper' parameter of nf_ct_helper_ext_add() is unused now, so remove it.
Signed-off-by: Gao Feng <gfree.wind@vip.163.com>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Julian Anastasov [Fri, 6 Jul 2018 05:25:54 +0000 (08:25 +0300)]
ipvs: drop conn templates under attack
Before now, connection templates were ignored by the random
dropentry procedure. But Michal Koutný suggests that we
should add an exception for connections under SYN attack.
He provided a patch that implements it for TCP:
<quote>
IPVS includes protection against filling the ip_vs_conn_tab by
dropping 1/32 of feasible entries every second. The template
entries (for persistent services) are never directly deleted by
this mechanism but when a picked TCP connection entry is being
dropped (1), the respective template entry is dropped too (realized
by expiring 60 seconds after the connection entry being dropped).
There is another mechanism that removes connection entries when they
time out (2), in this case the associated template entry is not deleted.
Under SYN flood, template entries would accumulate (due to their
longer timeout).
The accumulation takes place also with drop_entry being enabled. Roughly
15% ((31/32)^60) of SYN_RECV connections survive the dropping mechanism
(1) and are removed by the timeout mechanism (2)(defaults to 60 seconds
for SYN_RECV), thus template entries would still accumulate.
The patch ensures that when a connection entry times out, we also remove
the template entry from the table. To prevent breaking persistent
services (since the connection may time out in an already established
state) we add a new entry flag to protect templates that spawned at least
one established TCP connection.
</quote>
We already added ASSURED flag for the templates in previous patch, so
that we can use it now to decide which connection templates should be
dropped under attack. But we also have some cases that need special
handling.
We modify the dropentry procedure as follows:
- Linux timers currently use LIFO ordering but we can not rely on
this to drop controlling connections. So, set cp->timeout to 0
to indicate that the connection was dropped and that on expiration we
should try to drop our controlling connections. As a result, we can
now avoid the ip_vs_conn_expire_now call.
- move the cp->n_control check above, so that it avoids restarting
the timer for controlling connections when not needed.
- drop unassured connection templates here if they are not referenced
by any connections.
On connection expiration: if connection was dropped (cp->timeout=0)
try to drop our controlling connection except if it is a template
in assured state.
In ip_vs_conn_flush, change the order of ip_vs_conn_expire_now calls
according to the LIFO timer expiration order. It should work
faster for controlling connections with a single controlled one.
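A simplified sketch of the expiration-time decision described above; the
fields follow struct ip_vs_conn, but the "assured" test is passed in as a
parameter since the exact flag name is introduced by the companion patch:
#include <net/ip_vs.h>
/* A connection dropped by the dropentry procedure is marked with
 * cp->timeout == 0; on expiration it should also try to drop its
 * controlling connection, unless that is a template that already
 * reached the new assured state.
 */
static bool ipvs_should_drop_ctl_sketch(const struct ip_vs_conn *cp,
					bool ctl_template_assured)
{
	if (cp->timeout != 0 || !cp->control)
		return false;

	if ((cp->control->flags & IP_VS_CONN_F_TEMPLATE) &&
	    ctl_template_assured)
		return false;	/* keep assured templates for persistent services */

	return true;
}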
Suggested-by: Michal Koutný <mkoutny@suse.com>
Signed-off-by: Julian Anastasov <ja@ssi.bg>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Julian Anastasov [Fri, 6 Jul 2018 05:25:53 +0000 (08:25 +0300)]
ipvs: add assured state for conn templates
cp->state was not used for templates. Add support for state bits
and for the first "assured" bit which indicates that some
connection controlled by this template was established or assured
by the real server. In a followup patch we will use it to drop
templates under SYN attack.
Signed-off-by: Julian Anastasov <ja@ssi.bg>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Julian Anastasov [Fri, 6 Jul 2018 05:25:52 +0000 (08:25 +0300)]
ipvs: provide just conn to ip_vs_state_name
In preparation for followup patches, provide just the cp
ptr to ip_vs_state_name.
Signed-off-by: Julian Anastasov <ja@ssi.bg>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Martynas Pumputis [Mon, 2 Jul 2018 14:52:14 +0000 (16:52 +0200)]
netfilter: nf_conntrack: resolve clash for matching conntracks
This patch enables the clash resolution for NAT (disabled in
"590b52e10d41") if clashing conntracks match (i.e. both tuples are equal)
and a protocol allows it.
The clash might happen for a connection-less protocol (e.g. UDP) when
two threads write in parallel to the same socket and consecutive calls
to "get_unique_tuple" return the same tuples (incl. reply tuples).
In this case it is safe to perform the resolution, as the losing CT
describes the same mangling as the winning CT, so no modifications to
the packet are needed, and the result of rules traversal for the loser's
packet stays valid.
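A hedged sketch of the condition that makes the clash safe to resolve; the
helper name is illustrative, but nf_ct_tuple_equal() and the tuplehash
layout are the standard conntrack ones:
#include <net/netfilter/nf_conntrack.h>
/* Clash resolution is safe when the losing conntrack describes exactly
 * the same mangling as the winning one: both the original and the reply
 * tuples must be equal, and the L4 protocol must allow it (e.g. UDP).
 */
static bool clash_is_resolvable_sketch(const struct nf_conn *loser,
				       const struct nf_conn *winner,
				       bool l4proto_allows_clash)
{
	return l4proto_allows_clash &&
	       nf_ct_tuple_equal(&loser->tuplehash[IP_CT_DIR_ORIGINAL].tuple,
				 &winner->tuplehash[IP_CT_DIR_ORIGINAL].tuple) &&
	       nf_ct_tuple_equal(&loser->tuplehash[IP_CT_DIR_REPLY].tuple,
				 &winner->tuplehash[IP_CT_DIR_REPLY].tuple);
}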
Signed-off-by: Martynas Pumputis <martynas@weave.works>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Yi-Hung Wei [Tue, 3 Jul 2018 00:33:44 +0000 (17:33 -0700)]
netfilter: nf_conncount: Add list lock and gc worker, and RCU for init tree search
This patch is originally from Florian Westphal.
This patch does the following 3 main tasks.
1) Add a list lock to 'struct nf_conncount_list' so that we can
alter the lists containing the individual connections without holding the
main tree lock. It is useful when we only need to add/remove to/from
a list without allocating/removing a node in the tree. With this change, we
update nft_connlimit accordingly, since we no longer need to maintain
a list lock in nft_connlimit now.
2) Use RCU for the initial tree search to improve tree lookup performance.
3) Add a garbage collection worker. This worker is scheduled when there
are excessive tree nodes that need to be recycled.
Moreover, the rbnode reclaim logic is moved from the search tree to the
insert tree to avoid a race condition.
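A hedged sketch of the per-list state described in point 1); the field
names approximate the actual 'struct nf_conncount_list' but the struct
name here is illustrative:
#include <linux/spinlock.h>
#include <linux/list.h>

/* Per-key connection list: protected by its own lock so list add/del
 * no longer needs the main tree lock.
 */
struct nf_conncount_list_sketch {
	spinlock_t list_lock;		/* guards head and count */
	struct list_head head;		/* connections sharing this key */
	unsigned int count;		/* cached number of connections */
	bool dead;			/* set once the tree node is reclaimed */
};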
Signed-off-by: Yi-Hung Wei <yihung.wei@gmail.com>
Signed-off-by: Florian Westphal <fw@strlen.de>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Yi-Hung Wei [Tue, 3 Jul 2018 00:33:43 +0000 (17:33 -0700)]
netfilter: nf_conncount: Split insert and traversal
This patch is originally from Florian Westphal.
When we have a very coarse grouping, e.g. by large subnets, zone id,
etc., it's likely that we do not need to do tree rotation because
we'll find a node where we can attach a new entry. Based on this
observation, we split tree traversal and insertion.
Later on, we can make traversal lockless (tree protected
by RCU), and add extra lock in the individual nodes to protect list
insertion/deletion, thereby allowing parallel insert/delete in different
tree nodes.
Signed-off-by: Yi-Hung Wei <yihung.wei@gmail.com>
Signed-off-by: Florian Westphal <fw@strlen.de>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Yi-Hung Wei [Tue, 3 Jul 2018 00:33:42 +0000 (17:33 -0700)]
netfilter: nf_conncount: Move locking into count_tree()
This patch is originally from Florian Westphal.
This is a preparation patch to allow lockless traversal
of the tree via RCU.
Signed-off-by: Yi-Hung Wei <yihung.wei@gmail.com>
Signed-off-by: Florian Westphal <fw@strlen.de>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Yi-Hung Wei [Tue, 3 Jul 2018 00:33:41 +0000 (17:33 -0700)]
netfilter: nf_conncount: Early exit in nf_conncount_lookup() and cleanup
This patch is originally from Florian Westphal.
This patch does the following three tasks.
It applies the same early exit technique to nf_conncount_lookup().
Since now we keep the number of connections in 'struct nf_conncount_list',
we no longer need to return the count in nf_conncount_lookup().
Moreover, we expose the garbage collection function nf_conncount_gc_list()
for nft_connlimit.
Signed-off-by: Yi-Hung Wei <yihung.wei@gmail.com>
Signed-off-by: Florian Westphal <fw@strlen.de>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Yi-Hung Wei [Tue, 3 Jul 2018 00:33:40 +0000 (17:33 -0700)]
netfilter: nf_conncount: Switch to plain list
Original patch is from Florian Westphal.
This patch switches from hlist to a plain list to store the list of
connections with the same filtering key in nf_conncount. With the
plain list, we can insert new connections at the tail, so over time
the beginning of the list holds long-running connections and those are
expired first, while the newly created ones are at the end.
Later on, we could probably move checked ones to the end of the list,
so the next run has a higher chance to reclaim stale entries at the front.
Signed-off-by: Yi-Hung Wei <yihung.wei@gmail.com>
Signed-off-by: Florian Westphal <fw@strlen.de>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Yi-Hung Wei [Tue, 3 Jul 2018 00:33:39 +0000 (17:33 -0700)]
netfilter: nf_conncount: Early exit for garbage collection
This patch is originally from Florian Westphal.
We use an extra function with early exit for garbage collection.
It is not necessary to traverse the full list for every node since
it is enough to zap a couple of entries for garbage collection.
Signed-off-by: Yi-Hung Wei <yihung.wei@gmail.com>
Signed-off-by: Florian Westphal <fw@strlen.de>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Máté Eckl [Tue, 10 Jul 2018 14:01:28 +0000 (16:01 +0200)]
netfilter: Kconfig: Change select IPv6 dependencies
... from IPV6 to NF_TABLES_IPV6 and IP6_NF_IPTABLES.
In some cases module selects depend on IPV6, but this means that they
select another module even if e.g. NF_TABLES_IPV6 is not set, in which
case the selected module is useless due to the lack of IPv6 nf_tables
functionality.
The same applies for IP6_NF_IPTABLES and iptables.
Joint work with: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Máté Eckl <ecklm94@gmail.com>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Florian Westphal [Fri, 29 Jun 2018 05:46:51 +0000 (07:46 +0200)]
netfilter: conntrack: remove l3proto abstraction
This unifies ipv4 and ipv6 protocol trackers and removes the l3proto
abstraction.
This gets rid of all l3proto indirect calls and the need to do
a lookup on the function to call for l3 demux.
It increases nf_conntrack.ko's size by only a small amount (12 kbyte), and
overall this reduces size, because nf_conntrack.ko is useless without either
the nf_conntrack_ipv4 or nf_conntrack_ipv6 module.
before:
text data bss dec hex filename
7357 1088 0 8445 20fd nf_conntrack_ipv4.ko
7405 1084 4 8493 212d nf_conntrack_ipv6.ko
72614 13689 236 86539 1520b nf_conntrack.ko
19K nf_conntrack_ipv4.ko
19K nf_conntrack_ipv6.ko
179K nf_conntrack.ko
after:
text data bss dec hex filename
79277 13937 236 93450 16d0a nf_conntrack.ko
191K nf_conntrack.ko
Signed-off-by: Florian Westphal <fw@strlen.de>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Florian Westphal [Fri, 29 Jun 2018 05:46:50 +0000 (07:46 +0200)]
netfilter: conntrack: remove get_timeout() indirection
Not needed; we can have the l4 trackers fetch it themselves.
Signed-off-by: Florian Westphal <fw@strlen.de>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Florian Westphal [Fri, 29 Jun 2018 05:46:49 +0000 (07:46 +0200)]
netfilter: conntrack: avoid l4proto pkt_to_tuple calls
Handle common protocols (udp, tcp, ...) in the core and only
do the indirect call if needed by the l4proto tracker.
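A minimal sketch of the "handle it in the core" idea for the port-based
protocols: TCP, UDP and friends all keep source and destination ports in
the first four bytes of the L4 header, so no per-protocol callback is
needed (GRE, ICMP etc. still require their own handler); the function
name is illustrative:
#include <linux/skbuff.h>
#include <linux/types.h>

/* Fill src/dst ports directly for the common port-based protocols. */
static bool generic_pkt_to_tuple_sketch(const struct sk_buff *skb,
					unsigned int dataoff,
					__be16 *src_port, __be16 *dst_port)
{
	struct { __be16 sport, dport; } _hdr, *hdr;

	hdr = skb_header_pointer(skb, dataoff, sizeof(_hdr), &_hdr);
	if (!hdr)
		return false;

	*src_port = hdr->sport;
	*dst_port = hdr->dport;
	return true;
}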
Signed-off-by: Florian Westphal <fw@strlen.de>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Florian Westphal [Fri, 29 Jun 2018 05:46:48 +0000 (07:46 +0200)]
netfilter: conntrack: avoid calls to l4proto invert_tuple
Handle the common cases (tcp, udp, etc.) in the core and only
do the indirect call for the protocols that need it (GRE, for instance).
Signed-off-by: Florian Westphal <fw@strlen.de>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Florian Westphal [Fri, 29 Jun 2018 05:46:47 +0000 (07:46 +0200)]
netfilter: conntrack: remove get_l4proto indirection from l3 protocol trackers
Handle it in the core instead.
ipv6_skip_exthdr() is built-in even if ipv6 is a module, i.e. this
doesn't create an ipv6 dependency.
Signed-off-by: Florian Westphal <fw@strlen.de>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Florian Westphal [Fri, 29 Jun 2018 05:46:46 +0000 (07:46 +0200)]
netfilter: conntrack: remove invert_tuple indirection from l3 protocol trackers
It's simpler to just handle it directly in nf_ct_invert_tuple().
This also gets rid of the need to pass the l3proto pointer to resolve_conntrack().
Signed-off-by: Florian Westphal <fw@strlen.de>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Florian Westphal [Fri, 29 Jun 2018 05:46:45 +0000 (07:46 +0200)]
netfilter: conntrack: remove pkt_to_tuple indirection from l3 protocol trackers
Signed-off-by: Florian Westphal <fw@strlen.de>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Florian Westphal [Fri, 29 Jun 2018 05:46:44 +0000 (07:46 +0200)]
netfilter: conntrack: remove ctnetlink callbacks from l3 protocol trackers
Handle everything from ctnetlink directly.
After all these years we still only support ipv4 and ipv6, so it
seems reasonable to remove l3 protocol tracker support and instead
handle ipv4/ipv6 from a common, always builtin inet tracker.
Step 1: Get rid of all the l3proto->func() calls.
Start with ctnetlink, then move on to packet-path ones.
Signed-off-by: Florian Westphal <fw@strlen.de>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Máté Eckl [Thu, 28 Jun 2018 18:01:02 +0000 (20:01 +0200)]
netfilter: Kconfig: Make NETFILTER_XT_MATCH_SOCKET select NF_SOCKET_IPV4/6
Instead of depending on it.
Signed-off-by: Máté Eckl <ecklm94@gmail.com>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Florian Westphal [Mon, 25 Jun 2018 15:55:32 +0000 (17:55 +0200)]
openvswitch: use nf_ct_get_tuplepr, invert_tuplepr
These versions deal with the l3proto/l4proto details internally.
This removes the only caller of nf_ct_get_tuple, so make it static.
After this, l3proto->get_l4proto() can be removed in a followup patch.
Signed-off-by: Florian Westphal <fw@strlen.de>
Acked-by: Pravin B Shelar <pshelar@ovn.org>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Florian Westphal [Mon, 25 Jun 2018 15:49:59 +0000 (17:49 +0200)]
netfilter: utils: move nf_ip6_checksum* from ipv6 to utils
Similar to the previous change, this also allows us to remove it
from nf_ipv6_ops and avoid the indirection.
It also removes the bogus dependency of nf_conntrack_ipv6 on ipv6 module:
ipv6 checksum functions are built into kernel even if CONFIG_IPV6=m,
but ipv6/netfilter.o isn't.
Signed-off-by: Florian Westphal <fw@strlen.de>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Florian Westphal [Mon, 25 Jun 2018 15:49:43 +0000 (17:49 +0200)]
netfilter: utils: move nf_ip_checksum* from ipv4 to utils
This allows us to make nf_ip_checksum_partial static, as it no longer
has an external caller.
Signed-off-by: Florian Westphal <fw@strlen.de>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Máté Eckl [Mon, 18 Jun 2018 13:12:52 +0000 (15:12 +0200)]
netfilter: nft_tproxy: Move nf_tproxy_assign_sock() to nf_tproxy.h
This function is also necessary to implement nft tproxy support.
Fixes: 45ca4e0cf273 ("netfilter: Libify xt_TPROXY")
Signed-off-by: Máté Eckl <ecklm94@gmail.com>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Florian Westphal [Fri, 15 Jun 2018 21:46:42 +0000 (23:46 +0200)]
netfilter: flowtables: use fixed renew timeout on teardown
This is one of the very few external callers of ->get_timeouts().
We can use a fixed timeout instead; the conntrack core will refresh this in
case a new packet comes in within this period.
Using the ESTABLISHED timeout seems way too long anyway.
Signed-off-by: Florian Westphal <fw@strlen.de>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Taehee Yoo [Mon, 11 Jun 2018 16:54:47 +0000 (01:54 +0900)]
netfilter: nft_reject_bridge: remove unnecessary ttl set
In nft_reject_br_send_v4_tcp_reset(), the ttl is already set by
nf_reject_iphdr_put(), so the code below is unnecessary.
Signed-off-by: Taehee Yoo <ap420073@gmail.com>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
David S. Miller [Mon, 16 Jul 2018 07:13:40 +0000 (00:13 -0700)]
Merge branch 'TLS-offload-rx-netdev-and-mlx5'
Boris Pismenny says:
====================
TLS offload rx, netdev & mlx5
The following series provides TLS RX inline crypto offload.
v4->v5:
- Remove the Kconfig to mutually exclude both IPsec and TLS
v3->v4:
- Remove the iov revert for zero copy send flow
v2->v3:
- Fix typo
- Adjust cover letter
- Fix bug in zero copy flows
- Use network byte order for the record number in resync
- Adjust the sequence provided in resync
v1->v2:
- Fix bisectability problems due to variable name changes
- Fix potential uninitialized return value
This series completes the generic infrastructure to offload TLS crypto to
a network device. It enables the kernel TLS socket to skip decryption and
authentication operations for SKBs marked as decrypted on the receive
side of the data path, leaving those computationally expensive operations
to the NIC.
This infrastructure doesn't require a TCP offload engine. Instead, the
NIC decrypts a packet's payload if the packet contains the expected TCP
sequence number. The TLS record authentication tag remains unmodified
regardless of decryption. If the packet is decrypted successfully and it
contains an authentication tag, then the authentication check has passed.
Otherwise, if the authentication fails, then the packet is provided
unmodified and the KTLS layer is responsible for handling it.
Out-Of-Order TCP packets are provided unmodified. As a result,
in the slow path some of the SKBs are decrypted while others remain as
ciphertext.
The GRO and TCP layers must not coalesce decrypted and non-decrypted SKBs.
In the worst case, a received TLS record consists of both plaintext
and ciphertext packets. These partially decrypted records must be
re-encrypted, only to be decrypted.
The notable differences between SW KTLS and NIC offloaded TLS
implementations are as follows:
1. Partial decryption - Software must handle the case of a TLS record
that was only partially decrypted by HW. This can happen due to packet
reordering.
2. Resynchronization - tls_read_size calls the device driver to
resynchronize HW whenever it lost track of the TLS record framing in
the TCP stream.
The infrastructure should be extendable to support various NIC offload
implementations. However it is currently written with the
implementation below in mind:
The NIC identifies packets that should be offloaded according to
the 5-tuple and the TCP sequence number. If these match and the
packet is decrypted and authenticated successfully, then a syndrome
is provided to software. Otherwise, the packet is unmodified.
Decrypted and non-decrypted packets aren't coalesced by the network stack,
and the KTLS layer decrypts and authenticates partially decrypted records.
The NIC provides an indication whenever a resync is required. The resync
operation is triggered by the KTLS layer while parsing TLS record headers.
Finally, we measure the performance obtained by running single stream
iperf with two Intel(R) Xeon(R) CPU E5-2620 v3 @ 2.40GHz machines connected
back-to-back with Innova TLS (40Gbps) NICs. We compare TCP (upper bound)
and KTLS-Offload running both in Tx and Rx. The results show that the
performance of offload is comparable to TCP.
                   | Bandwidth (Gbps) | CPU Tx (%) | CPU Rx (%)
TCP                | 28.8             | 5          | 12
KTLS-Offload-Tx-Rx | 28.6             | 7          | 14
Paper: https://netdevconf.org/2.2/papers/pismenny-tlscrypto-talk.pdf
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Boris Pismenny [Fri, 13 Jul 2018 11:33:52 +0000 (14:33 +0300)]
net/mlx5e: IPsec, fix byte count in CQE
This patch fixes the byte count indication in CQE for processed IPsec
packets that contain a metadata header.
Signed-off-by: Boris Pismenny <borisp@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Boris Pismenny [Fri, 13 Jul 2018 11:33:51 +0000 (14:33 +0300)]
net/mlx5: Accel, add common metadata functions
This patch adds common functions to handle Mellanox metadata headers.
These functions are used by IPsec and TLS to process FPGA metadata.
Signed-off-by: Boris Pismenny <borisp@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Boris Pismenny [Fri, 13 Jul 2018 11:33:50 +0000 (14:33 +0300)]
net/mlx5e: TLS, build TLS netdev from capabilities
This patch enables TLS Rx based on available HW capabilities.
Signed-off-by: Boris Pismenny <borisp@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Boris Pismenny [Fri, 13 Jul 2018 11:33:49 +0000 (14:33 +0300)]
net/mlx5e: TLS, add software statistics
This patch adds software statistics for TLS to count important
events.
Signed-off-by: Boris Pismenny <borisp@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Boris Pismenny [Fri, 13 Jul 2018 11:33:48 +0000 (14:33 +0300)]
net/mlx5e: TLS, add Innova TLS rx data path
Implement the TLS rx offload data path according to the
requirements of the TLS generic NIC offload infrastructure.
A special metadata ethertype is used to pass information to
the hardware.
When the hardware loses synchronization, a special resync request
metadata message is used to request a resync.
Signed-off-by: Boris Pismenny <borisp@mellanox.com>
Signed-off-by: Ilya Lesokhin <ilyal@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Boris Pismenny [Fri, 13 Jul 2018 11:33:47 +0000 (14:33 +0300)]
net/mlx5e: TLS, add innova rx support
Add the mlx5 implementation of the TLS Rx routines to add/del TLS
contexts, also add the tls_dev_resync_rx routine
to work with the TLS inline Rx crypto offload infrastructure.
Signed-off-by: Boris Pismenny <borisp@mellanox.com>
Signed-off-by: Ilya Lesokhin <ilyal@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Boris Pismenny [Fri, 13 Jul 2018 11:33:46 +0000 (14:33 +0300)]
net/mlx5: Accel, add TLS rx offload routines
In Innova TLS, TLS contexts are added or deleted
via a command message over the SBU connection.
The HW then sends a response message over the same connection.
Complete the implementation for Innova TLS (FPGA-based) hardware by
adding support for rx inline crypto offload.
Signed-off-by: Boris Pismenny <borisp@mellanox.com>
Signed-off-by: Ilya Lesokhin <ilyal@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Boris Pismenny [Fri, 13 Jul 2018 11:33:45 +0000 (14:33 +0300)]
net/mlx5e: TLS, refactor variable names
For symmetry, we rename mlx5e_tls_offload_context to
mlx5e_tls_offload_context_tx before we add mlx5e_tls_offload_context_rx.
Signed-off-by: Boris Pismenny <borisp@mellanox.com>
Reviewed-by: Aviad Yehezkel <aviadye@mellanox.com>
Reviewed-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Boris Pismenny [Fri, 13 Jul 2018 11:33:44 +0000 (14:33 +0300)]
tls: Fix zerocopy_from_iter iov handling
zerocopy_from_iter iterates over the message, but it doesn't revert the
updates made by the iov iteration. This patch fixes it. Now, the iov can
be used after calling zerocopy_from_iter.
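A hedged sketch of the kind of fix described, using the standard iov_iter
API (the surrounding TLS code and the wrapper name are illustrative):
#include <linux/uio.h>

/* If the copy advanced the caller's iov but the iov must remain usable
 * afterwards, rewind it by the number of bytes consumed.
 */
static void revert_iter_sketch(struct iov_iter *iter, size_t copied)
{
	iov_iter_revert(iter, copied);	/* undo the advance done while copying */
}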
Fixes: 3c4d75591 ("tls: kernel TLS support")
Signed-off-by: Boris Pismenny <borisp@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Boris Pismenny [Fri, 13 Jul 2018 11:33:43 +0000 (14:33 +0300)]
tls: Add rx inline crypto offload
This patch completes the generic infrastructure to offload TLS crypto to a
network device. It enables the kernel to skip decryption and
authentication of some skbs marked as decrypted by the NIC. In the fast
path, all packets received are decrypted by the NIC and the performance
is comparable to plain TCP.
This infrastructure doesn't require a TCP offload engine. Instead, the
NIC only decrypts packets that contain the expected TCP sequence number.
Out-Of-Order TCP packets are provided unmodified. As a result, in the
worst case a received TLS record consists of both plaintext and ciphertext
packets. These partially decrypted records must be re-encrypted,
only to be decrypted.
The notable differences between SW KTLS Rx and this offload are as
follows:
1. Partial decryption - Software must handle the case of a TLS record
that was only partially decrypted by HW. This can happen due to packet
reordering.
2. Resynchronization - tls_read_size calls the device driver to
resynchronize HW after HW lost track of TLS record framing in
the TCP stream.
Signed-off-by: Boris Pismenny <borisp@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Boris Pismenny [Fri, 13 Jul 2018 11:33:42 +0000 (14:33 +0300)]
tls: Fill software context without allocation
This patch allows tls_set_sw_offload to fill the context in case it was
already allocated previously.
We will use it in TLS_DEVICE to fill the RX software context.
Signed-off-by: Boris Pismenny <borisp@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Boris Pismenny [Fri, 13 Jul 2018 11:33:41 +0000 (14:33 +0300)]
tls: Split tls_sw_release_resources_rx
This patch splits tls_sw_release_resources_rx into two functions one
which releases all inner software tls structures and another that also
frees the containing structure.
In TLS_DEVICE we will need to release the software structures without
freeing the containing structure, which contains other information.
Signed-off-by: Boris Pismenny <borisp@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Boris Pismenny [Fri, 13 Jul 2018 11:33:40 +0000 (14:33 +0300)]
tls: Split decrypt_skb to two functions
Previously, decrypt_skb also updated the TLS context.
Now, decrypt_skb only decrypts the payload using the current context,
while decrypt_skb_update also updates the state.
Later, in the tls_device Rx flow, we will use decrypt_skb directly.
Signed-off-by: Boris Pismenny <borisp@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Boris Pismenny [Fri, 13 Jul 2018 11:33:39 +0000 (14:33 +0300)]
tls: Refactor tls_offload variable names
For symmetry, we rename tls_offload_context to
tls_offload_context_tx before we add tls_offload_context_rx.
Signed-off-by: Boris Pismenny <borisp@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Boris Pismenny [Fri, 13 Jul 2018 11:33:38 +0000 (14:33 +0300)]
tcp: Don't coalesce decrypted and encrypted SKBs
Prevent coalescing of decrypted and encrypted SKBs in GRO
and TCP layer.
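A hedged sketch of the check added to the coalescing paths, assuming the
skb 'decrypted' bit introduced by the "net: Add decrypted field to skb"
patch later in this series (the helper name is illustrative):
#include <linux/skbuff.h>

/* Two skbs may only be coalesced if both are decrypted or both are
 * still ciphertext; mixing them would corrupt the TLS record state.
 */
static bool tls_can_coalesce_sketch(const struct sk_buff *a,
				    const struct sk_buff *b)
{
#ifdef CONFIG_TLS_DEVICE
	return a->decrypted == b->decrypted;
#else
	return true;
#endif
}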
Signed-off-by: Boris Pismenny <borisp@mellanox.com>
Signed-off-by: Ilya Lesokhin <ilyal@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Boris Pismenny [Fri, 13 Jul 2018 11:33:37 +0000 (14:33 +0300)]
net: Add TLS rx resync NDO
Add new netdev tls op for resynchronizing HW tls context
Signed-off-by: Boris Pismenny <borisp@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Ilya Lesokhin [Fri, 13 Jul 2018 11:33:36 +0000 (14:33 +0300)]
net: Add TLS RX offload feature
This patch adds a netdev feature to configure TLS RX inline crypto offload.
Signed-off-by: Ilya Lesokhin <ilyal@mellanox.com>
Signed-off-by: Boris Pismenny <borisp@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Boris Pismenny [Fri, 13 Jul 2018 11:33:35 +0000 (14:33 +0300)]
net: Add decrypted field to skb
The decrypted bit is propagated to cloned/copied skbs.
This will be used later by the inline crypto receive side offload
of tls.
Signed-off-by: Boris Pismenny <borisp@mellanox.com>
Signed-off-by: Ilya Lesokhin <ilyal@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Mon, 16 Jul 2018 07:10:01 +0000 (00:10 -0700)]
Merge branch 'mvpp2-add-debugfs-interface'
Maxime Chevallier says:
====================
net: mvpp2: add debugfs interface
The PPv2 Header Parser and Classifier are not straightforward to debug,
having easy access to some of the many lookup tables configuration is
helpful during development and debug.
This series adds a basic debugfs interface, allowing us to read data from
the Header Parser and some of the Classifier tables.
For now, the interface is read-only, and contains only some basic info.
This was actually used during RSS development, and might be useful to
troubleshoot some issues we might find.
The first patch of the series converts the mvpp2 files to SPDX, which
eases adding the new debugfs dedicated file.
The second patch adds the interface, and exposes basic Header Parser data.
The 3rd patch adds a hit counter for the Header Parser TCAM.
The 4th patch exposes classifier info.
The 5th patch adds some hit counters for some of the classifier engines.
Changes since V1:
- Rebased on the latest net-next
- Made cls_flow_get non-static so that it can be used in mvpp2_debugfs
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Maxime Chevallier [Sat, 14 Jul 2018 11:29:28 +0000 (13:29 +0200)]
net: mvpp2: debugfs: add classifier hit counters
The classification operations that are used for RSS make use of several
lookup tables. Having hit counters for these tables is really helpful
to determine what flows were matched by ingress traffic, and see the
path of packets among all the classifier tables.
This commit adds hit counters for the 3 tables used at the moment:
- The decoding table (also called lookup_id table), that links flows
identified by the Header Parser to the flow table.
There's one entry per flow, located at:
.../mvpp2/<controller>/flows/XX/dec_hits
Note that there are 21 flows in the decoding table, whereas there are
52 flows in the Header Parser. That's because there are several kinds
of traffic that will match a given flow. Reading the hit counter from
one sub-flow will clear all hit counters that have the same flow_id.
This also applies to the flow_hits.
- The flow table, that contains all the different lookups to be
performed by the classifier for each packet of a given flow. The match
is done on the first entry of the flow sequence.
There's one entry per flow, located at:
.../mvpp2/<controller>/flows/XX/flow_hits
- The C2 engine entries, that are used to assign the default rx queue,
and enable or disable RSS for a given port.
There is one C2 entry per port, so the c2 hit counter is located at:
.../mvpp2/<controller>/ethX/c2_hits
All hit counter values are 16-bit, clear-on-read values.
Signed-off-by: Maxime Chevallier <maxime.chevallier@bootlin.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Maxime Chevallier [Sat, 14 Jul 2018 11:29:27 +0000 (13:29 +0200)]
net: mvpp2: debugfs: add entries for classifier flows
The classifier configuration for RSS is quite complex, with several
lookup tables being used. This commit adds useful info in debugfs to
see how the different tables are configured:
Added 2 new entries in the per-port directory:
- .../eth0/default_rxq : The default rx queue on that port
- .../eth0/rss_enable : Indicates if RSS is enabled in the C2 entry
Added the 'flows' directory:
It contains one entry per sub-flow. A 'sub-flow' is a unique path from
the Header Parser to the flow table. Multiple sub-flows can point to the
same 'flow' (each flow has an id from 8 to 29, which is its index in the
Lookup Id table):
- .../flows/00/...
           /01/...
            ...
           /51/id : The flow id. There are 21 unique flows. There's one
                    flow per combination of the following parameters:
                    - L4 protocol (TCP, UDP, none)
                    - L3 protocol (IPv4, IPv6)
                    - L3 parameters (Fragmented or not)
                    - L2 parameters (Vlan tag presence or not)
           .../type : The flow type. This is an even higher level flow,
                      that we manipulate with ethtool. It can be:
                      "udp4" "tcp4" "udp6" "tcp6" "ipv4" "ipv6" "other".
           .../eth0/...
           .../eth1/engine : The hash generation engine used for this
                             flow on the given port
                   .../hash_opts : The hash generation options indicating on
                                   what data we base the hash (vlan tag, src
                                   IP, src port, etc.)
Signed-off-by: Maxime Chevallier <maxime.chevallier@bootlin.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Maxime Chevallier [Sat, 14 Jul 2018 11:29:26 +0000 (13:29 +0200)]
net: mvpp2: debugfs: add hit counter stats for Header Parser entries
One helpful feature when debugging the Header Parser TCAM filter in PPv2
is being able to see whether the entries matched anything when a packet
comes in. This can be done by using the built-in hit counter for TCAM
entries.
This commit implements reading the counter, and exposing its value on
debugfs for each filter entry.
The counter is a 16-bit, clear-on-read value, located at:
.../mvpp2/<controller>/parser/XXX/hits
Signed-off-by: Maxime Chevallier <maxime.chevallier@bootlin.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Maxime Chevallier [Sat, 14 Jul 2018 11:29:25 +0000 (13:29 +0200)]
net: mvpp2: add a debugfs interface for the Header Parser
The Marvell PPv2 Packet Header Parser has a TCAM based filter that is not
trivial to configure and debug. Being able to dump TCAM entries from
userspace can be really helpful for developing new features
and debugging existing ones.
This commit adds a basic debugfs interface for the PPv2 driver, focusing
on TCAM related features.
<mnt>/mvpp2/ --- f2000000.ethernet
              \- f4000000.ethernet --- parser --- 000 ...
                                    |          \- 001
                                    |          \- ...
                                    |          \- 255 --- ai
                                    |                  \- header_data
                                    |                  \- lookup_id
                                    |                  \- sram
                                    |                  \- valid
                                    \- eth1 ...
                                    \- eth2 --- mac_filter
                                             \- parser_entries
                                             \- vid_filter
There's one directory per PPv2 instance, named after pdev->name to make
sure names are unique. In each of these directories, there's:
- one directory per interface on the controller, each containing :
- "mac_filter", which lists all filtered addresses for this port
(based on TCAM, not on the kernel's uc / mc lists)
- "parser_entries", which lists the indices of all valid TCAM
entries that have this port in their port map
- "vid_filter", which lists the vids allowed on this port, based on
TCAM
- one "parser" directory (the parser is common to all ports), containing :
- one directory per TCAM entry (256 of them, from 0 to 255), each
containing :
- "ai" : Contains the 1 byte Additional Info field from TCAM, and
- "header_data" : Contains the 8 bytes Header Data extracted from
the packet
- "lookup_id" : Contains the 4 bits LU_ID
- "sram" : contains the raw SRAM data, which is the result of the TCAM
lookup. This readonly at the moment.
- "valid" : Indicates if the entry is valid of not.
All entries are read-only, and everything is output in hex form.
Signed-off-by: Maxime Chevallier <maxime.chevallier@bootlin.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Antoine Tenart [Sat, 14 Jul 2018 11:29:24 +0000 (13:29 +0200)]
net: mvpp2: switch to SPDX identifiers
Use the appropriate SPDX license identifiers and drop the license text.
This patch is only cosmetic.
Signed-off-by: Antoine Tenart <antoine.tenart@bootlin.com>
Signed-off-by: Maxime Chevallier <maxime.chevallier@bootlin.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Sun, 15 Jul 2018 01:47:44 +0000 (18:47 -0700)]
Merge git://git./linux/kernel/git/bpf/bpf-next
Daniel Borkmann says:
====================
pull-request: bpf-next 2018-07-15
The following pull-request contains BPF updates for your *net-next* tree.
The main changes are:
1) Various different arm32 JIT improvements in order to optimize code emission
and make the JIT code itself more robust, from Russell.
2) Support simultaneous driver and offloaded XDP in order to allow for advanced
use-cases where some work is offloaded to the NIC and some to the host. Also
add ability for bpftool to load programs and maps beyond just the cgroup case,
from Jakub.
3) Add BPF JIT support in nfp for multiplication as well as division. For the
latter in particular, it uses the reciprocal algorithm to emulate it, from Jiong.
4) Add BTF pretty print functionality to bpftool in plain and JSON output
format, from Okash.
5) Add build and installation to the BPF helper man page into bpftool, from Quentin.
6) Add a TCP BPF callback for listening sockets which is triggered right after
the socket transitions to TCP_LISTEN state, from Andrey.
7) Add a new cgroup tree command to bpftool which iterates over the whole cgroup
tree and prints all attached programs, from Roman.
8) Improve xdp_redirect_cpu sample to support parsing of double VLAN tagged
packets, from Jesper.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Daniel Borkmann [Sat, 14 Jul 2018 22:08:41 +0000 (00:08 +0200)]
Merge branch 'bpf-tcp-listen-cb'
Andrey Ignatov says:
====================
This patchset adds TCP-BPF callback for listening sockets.
Patch 0001 provides more details and is the main patch in the set.
Patch 0006 adds selftest for the new callback.
Other patches are bug fixes and improvements in the TCP-BPF selftest
to make it easier to extend in 0006.
====================
Acked-by: Lawrence Brakmo <brakmo@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Andrey Ignatov [Thu, 12 Jul 2018 00:33:37 +0000 (17:33 -0700)]
selftests/bpf: Test case for BPF_SOCK_OPS_TCP_LISTEN_CB
Cover new TCP-BPF callback in test_tcpbpf: when listen() is called on
socket, set BPF_SOCK_OPS_STATE_CB_FLAG so that BPF_SOCK_OPS_STATE_CB
callback can be called on future state transition, and when such a
transition happens (TCP_LISTEN -> TCP_CLOSE), track it in the map and
verify it in user space later.
Signed-off-by: Andrey Ignatov <rdna@fb.com>
Acked-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Andrey Ignatov [Thu, 12 Jul 2018 00:33:36 +0000 (17:33 -0700)]
selftests/bpf: Better verification in test_tcpbpf
Reduce the amount of copy/paste for debug info when the result is verified
in the test, and keep that info together with the values being checked so
that they won't get out of sync.
It also improves the debugging experience: instead of checking manually
what doesn't match in the debug output for all fields, only the unexpected
field is printed.
Signed-off-by: Andrey Ignatov <rdna@fb.com>
Acked-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Andrey Ignatov [Thu, 12 Jul 2018 00:33:35 +0000 (17:33 -0700)]
selftests/bpf: Switch test_tcpbpf_user to cgroup_helpers
Switch to cgroup_helpers to simplify the code and fix cgroup cleanup:
before, the cgroup was not cleaned up after the test.
It also removes the SYSTEM macro, which only printed an error but didn't
terminate the test.
Signed-off-by: Andrey Ignatov <rdna@fb.com>
Acked-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Andrey Ignatov [Thu, 12 Jul 2018 00:33:34 +0000 (17:33 -0700)]
selftests/bpf: Fix const'ness in cgroup_helpers
The lack of const in cgroup helpers' signatures forces callers to write
ugly client code. Fix it.
Signed-off-by: Andrey Ignatov <rdna@fb.com>
Acked-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Andrey Ignatov [Thu, 12 Jul 2018 00:33:33 +0000 (17:33 -0700)]
bpf: Sync bpf.h to tools/
Sync BPF_SOCK_OPS_TCP_LISTEN_CB related UAPI changes to tools/.
Signed-off-by: Andrey Ignatov <rdna@fb.com>
Acked-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Andrey Ignatov [Thu, 12 Jul 2018 00:33:32 +0000 (17:33 -0700)]
bpf: Add BPF_SOCK_OPS_TCP_LISTEN_CB
Add new TCP-BPF callback that is called on listen(2) right after socket
transition to TCP_LISTEN state.
It fills the gap for listening sockets in TCP-BPF. For example, a BPF
program can set BPF_SOCK_OPS_STATE_CB_FLAG when a socket becomes listening
and track the later transition from TCP_LISTEN to TCP_CLOSE with the
BPF_SOCK_OPS_STATE_CB callback.
Before, there was no way to do this with TCP-BPF, and other options were
much harder to work with. E.g. socket state tracking can be done with
tracepoints (either raw or regular), but they can't be attached to a cgroup
and their lifetime has to be managed separately.
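A hedged sketch of a cgroup sock_ops program using the new callback; the
constant names come from the UAPI this series touches, and the header path
assumes a libbpf-style bpf_helpers.h:
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

/* When a socket enters TCP_LISTEN, ask for state-change callbacks so the
 * later TCP_LISTEN -> TCP_CLOSE transition can be observed too.
 */
SEC("sockops")
int track_listen(struct bpf_sock_ops *skops)
{
	switch (skops->op) {
	case BPF_SOCK_OPS_TCP_LISTEN_CB:
		bpf_sock_ops_cb_flags_set(skops, BPF_SOCK_OPS_STATE_CB_FLAG);
		break;
	case BPF_SOCK_OPS_STATE_CB:
		/* skops->args[0] = old state, skops->args[1] = new state */
		break;
	}
	return 1;
}

char _license[] SEC("license") = "GPL";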
Signed-off-by: Andrey Ignatov <rdna@fb.com>
Acked-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
David S. Miller [Sat, 14 Jul 2018 18:23:26 +0000 (11:23 -0700)]
Merge branch 'mlxsw-VRRP'
Ido Schimmel says:
====================
mlxsw: Add VRRP support
When a router that is acting as the default gateway of a host stops
functioning, the host will encounter packet loss until the router starts
functioning again.
To increase the reliability of the default gateway without performing
reconfiguration on the host, a host can use a Virtual Router Redundancy
Protocol (VRRP) Router. This virtual router is composed of several
routers, where only one is actually forwarding packets from the host (the
master router) while the other routers act as backup routers. The
election of the master router is determined by the VRRP protocol [1].
Packets addressed to the virtual router are always sent to the virtual
router MAC address (IPv4: 00-00-5E-00-01-XX, IPv6: 00-00-5E-00-02-XX).
Such packets can only be accepted by the master router and must be
discarded by the backup routers.
In Linux, VRRP is usually implemented by configuring a macvlan with the
virtual router MAC on top of the router interface that is connected to
the host / LAN. The macvlan on the master router is assigned the virtual
IP (VIP) that the host uses as its gateway.
In order to support VRRP in mlxsw, we first need to enable macvlan upper
devices on top of mlxsw netdevs and their uppers. This is done by the
first patch, which also takes care of sanitizing macvlan configurations
that are not currently supported by the driver.
The second patch directs packets whose destination MAC matches one of the
macvlans to the router, so that they will undergo an L3 lookup. This is
consistent with the kernel's behavior where the macvlan's Rx handler
will re-inject such packets to the Rx path so that they will be picked
up by the IPvX protocol handlers and undergo an L3 lookup. Note that the
driver prevents the macvlans from being enslaved to other devices, to
ensure the packets will be picked up by the protocol handler and not by
another Rx handler.
The third patch adds packet traps for VRRP control packets for both IPv4
and IPv6. Finally, the last patch optimizes the reception of VRRP MACs
by potentially skipping one L2 lookup for them.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Ido Schimmel [Sat, 14 Jul 2018 08:39:54 +0000 (11:39 +0300)]
mlxsw: spectrum_router: Optimize processing of VRRP MACs
Hosts using a VRRP router send their packets with a destination MAC of
the VRRP router which is of the following form [1]:
IPv4 - 00-00-5E-00-01-{VRID}
IPv6 - 00-00-5E-00-02-{VRID}
Where VRID is the ID of the virtual router. Such packets are directed to
the router block in the ASIC by an FDB entry that was added in the
previous patch.
However, in certain cases it is possible to skip this FDB lookup and
send such packets directly to the router. This is accomplished by adding
these special MAC addresses to the RIF cache. If the cache is hit, the
packet will skip the L2 lookup and ingress the router with the RIF
specified in the cache entry.
1. https://tools.ietf.org/html/rfc5798#section-7.3
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Reviewed-by: Petr Machata <petrm@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Ido Schimmel [Sat, 14 Jul 2018 08:39:53 +0000 (11:39 +0300)]
mlxsw: spectrum: Add VRRP traps
Virtual Router Redundancy Protocol packets are used to communicate the
state of the Master router associated with the virtual router ID (VRID).
These are link-local multicast packets sent with IP protocol 112 that
are trapped in the router block in the ASIC.
Add a trap for these packets and mark the trapped packets to prevent
them from potentially being re-flooded by the bridge driver.
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Reviewed-by: Petr Machata <petrm@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Ido Schimmel [Sat, 14 Jul 2018 08:39:52 +0000 (11:39 +0300)]
mlxsw: spectrum_router: Direct macvlans' MACs to router
An IP packet received on a netdev with a macvlan upper whose MAC matches
the packet's destination MAC will be re-injected to the Rx path as if it
was received by the macvlan, and perform an L3 lookup.
Reflect this functionality to the ASIC by programming FDB entries that
will direct MACs of macvlan uppers to the router.
In a similar fashion to router interfaces (RIFs) that are programmed
upon the addition of the first IP address on an interface and destroyed
upon the removal of the last IP address, the FDB entries for the macvlan
are added and destroyed based on the addition of the first and removal
of the last IP address on the macvlan.
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Reviewed-by: Petr Machata <petrm@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Ido Schimmel [Sat, 14 Jul 2018 08:39:51 +0000 (11:39 +0300)]
mlxsw: spectrum: Enable macvlan upper devices
In order to allow more unicast MAC addresses (e.g., VRRP virtual MAC) to
be directed to the router we need to enable macvlan uppers on top of
mlxsw netdevs.
Allow macvlan upper devices on top of mlxsw netdevs and sanitize
configurations that can't work. For example, a macvlan can't be enslaved
to a bridge as without ACLs the device doesn't take the destination MAC
into account when classifying a packet to a bridge instance (i.e., a
FID).
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Reviewed-by: Petr Machata <petrm@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Yafang Shao [Sat, 14 Jul 2018 08:36:29 +0000 (16:36 +0800)]
tcp: remove redundant rcv_nxt update
tcp_rcv_nxt_update() is already executed in tcp_data_queue().
This line is redundant.
See below:
tcp_queue_rcv
  tcp_rcv_nxt_update(tcp_sk(sk), TCP_SKB_CB(skb)->end_seq);
tcp_rcv_nxt_update(tp, TCP_SKB_CB(skb)->end_seq);    <<<< redundant
Signed-off-by: Yafang Shao <laoar.shao@gmail.com>
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Okash Khawaja [Sat, 14 Jul 2018 04:57:04 +0000 (21:57 -0700)]
bpf: btf: print map dump and lookup with btf info
This patch augments the output of bpftool's map dump and map lookup
commands to print data alongside btf info, if the corresponding btf
info is available. The outputs for each of the map dump and map lookup
commands are augmented in two ways:
1. when neither of -j and -p are supplied, btf-ful map data is printed
whose aim is human readability. This means no commitments for json- or
backward- compatibility.
2. when either -j or -p are supplied, a new json object named
"formatted" is added for each key-value pair. This object contains the
same data as the key-value pair, but with btf info. "formatted" object
promises json- and backward- compatibility. Below is a sample output.
$ bpftool map dump -p id 8
[{
"key": ["0x0f","0x00","0x00","0x00"
],
"value": ["0x03", "0x00", "0x00", "0x00", ...
],
"formatted": {
"key": 15,
"value": {
"int_field": 3,
...
}
}
}
]
This patch calls btf_dumper introduced in previous patch to accomplish
the above. Indeed, btf-ful info is only displayed if btf data for the
given map is available. Otherwise existing output is displayed as-is.
Signed-off-by: Okash Khawaja <osk@fb.com>
Acked-by: Martin KaFai Lau <kafai@fb.com>
Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Okash Khawaja [Sat, 14 Jul 2018 04:57:03 +0000 (21:57 -0700)]
bpf: btf: add btf print functionality
This consumes functionality exported in the previous patch. It does the
main job of printing with BTF data. This is used in the following patch
to provide a more readable output of a map's dump. It relies on
json_writer to do json printing. Below is sample output where map keys
are ints and values are of type struct A:
typedef int int_type;
enum E {
	E0,
	E1,
};
struct B {
	int x;
	int y;
};
struct A {
	int m;
	unsigned long long n;
	char o;
	int p[8];
	int q[4][8];
	enum E r;
	void *s;
	struct B t;
	const int u;
	int_type v;
	unsigned int w1: 3;
	unsigned int w2: 3;
};
$ sudo bpftool map dump id 14
[{
"key": 0,
"value": {
"m": 1,
"n": 2,
"o": "c",
"p": [15,16,17,18,15,16,17,18
],
"q": [[25,26,27,28,25,26,27,28
],[35,36,37,38,35,36,37,38
],[45,46,47,48,45,46,47,48
],[55,56,57,58,55,56,57,58
]
],
"r": 1,
"s": 0x7ffd80531cf8,
"t": {
"x": 5,
"y": 10
},
"u": 100,
"v": 20,
"w1": 0x7,
"w2": 0x3
}
}
]
This patch uses json's {} and [] to imply struct/union and array. More
explicit information can be added later. For example, a command line
option can be introduced to print whether a key or value is struct
or union, name of a struct etc. This will however come at the expense
of duplicating info when, for example, printing an array of structs.
Enums are printed as ints without their names.
Signed-off-by: Okash Khawaja <osk@fb.com>
Acked-by: Martin KaFai Lau <kafai@fb.com>
Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Okash Khawaja [Sat, 14 Jul 2018 04:57:02 +0000 (21:57 -0700)]
bpf: btf: export btf types and name by offset from lib
This patch introduces btf__resolve_type() function and exports two
existing functions from libbpf. btf__resolve_type follows modifier
types like const and typedef until it hits a type which actually takes
up memory, and then returns it. This function follows similar pattern
to btf__resolve_size but instead of computing size, it just returns
the type.
These functions will be used in the following patch, which parses
information inside an array of `struct btf_type *`. btf_name_by_offset is
used for printing variable names.
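A hedged usage sketch, assuming today's libbpf API names btf__resolve_type(),
btf__type_by_id() and btf__name_by_offset() and the <bpf/btf.h> header:
#include <stdio.h>
#include <linux/btf.h>
#include <bpf/btf.h>

/* Follow const/typedef/etc. down to the underlying type and print its name. */
static void print_real_type_sketch(const struct btf *btf, __u32 type_id)
{
	const struct btf_type *t;
	int real_id = btf__resolve_type(btf, type_id);

	if (real_id < 0)
		return;

	t = btf__type_by_id(btf, real_id);
	if (t)
		printf("type: %s\n", btf__name_by_offset(btf, t->name_off));
}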
Signed-off-by: Okash Khawaja <osk@fb.com>
Acked-by: Martin KaFai Lau <kafai@fb.com>
Acked-by: Song Liu <songliubraving@fb.com>
Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Jakub Kicinski [Sat, 14 Jul 2018 02:08:59 +0000 (19:08 -0700)]
tools: include reallocarray feature test in FEATURE_TESTS_BASIC
perf propagates its feature check results to libbpf. This means
features for which perf probes must be a superset of libbpf's
required features. perf depends on FEATURE_TESTS_BASIC for its list
of features.
Commit 531b014e7a2f ("tools: bpf: make use of reallocarray") added
reallocarray use to libbpf, so make perf also perform the reallocarray
feature check.
Fixes: 531b014e7a2f ("tools: bpf: make use of reallocarray")
Reported-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Tested-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
kbuild test robot [Sat, 14 Jul 2018 01:27:35 +0000 (09:27 +0800)]
net: mvpp2: mvpp2_cls_flow_get() can be static
Fixes: f9358e12a0af ("net: mvpp2: split ingress traffic into multiple flows")
Signed-off-by: kbuild test robot <fengguang.wu@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Linus Walleij [Wed, 11 Jul 2018 17:45:11 +0000 (19:45 +0200)]
of: mdio: Support fixed links in of_phy_get_and_connect()
By a simple extension of of_phy_get_and_connect(), drivers
that have a fixed link on e.g. RGMII can also support
fixed links, so in addition to:
ethernet-port {
	phy-mode = "rgmii";
	phy-handle = <&foo>;
};
This setup with a fixed-link node and no phy-handle will
now also work just fine:
ethernet-port {
	phy-mode = "rgmii";
	fixed-link {
		speed = <1000>;
		full-duplex;
		pause;
	};
};
This is very helpful for connecting random ethernet ports
to e.g. DSA switches that typically reside on fixed links.
The phy-mode is still there, as the fixed link in this case
is still an RGMII link.
Tested on the Cortina Gemini driver with the Vitesse DSA
router chip on a fixed 1Gbit link.
Suggested-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
Vlad Buslov [Mon, 9 Jul 2018 10:29:11 +0000 (13:29 +0300)]
net: sched: refactor flower walk to iterate over idr
Extend struct tcf_walker with additional 'cookie' field. It is intended to
be used by classifier walk implementations to continue iteration directly
from particular filter, instead of iterating 'skip' number of times.
Change flower walk implementation to save filter handle in 'cookie'. Each
time flower walk is called, it looks up filter with saved handle directly
with idr, instead of iterating over filter linked list 'skip' number of
times. This change improves the complexity of dumping the flower classifier
from quadratic to linearithmic (assuming idr lookup has logarithmic
complexity).
Reviewed-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: Vlad Buslov <vladbu@mellanox.com>
Reported-by: Simon Horman <simon.horman@netronome.com>
Reviewed-by: Simon Horman <simon.horman@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Jesper Dangaard Brouer [Fri, 13 Jul 2018 14:35:14 +0000 (16:35 +0200)]
samples/bpf: xdp_redirect_cpu handle parsing of double VLAN tagged packets
People noticed that the code matches on the IEEE 802.1ad (ETH_P_8021AD)
ethertype, and this implies Q-in-Q or double tagged VLANs. Thus, we better
parse the next VLAN header too. It is even marked as a TODO.
This is relevant for real world use-cases, as XDP cpumap redirect can be
used when the NIC RSS hashing is broken. E.g. the ixgbe driver HW cannot
handle double tagged VLAN packets, and places everything into a single
RX queue. Using cpumap redirect, users can redistribute traffic across
CPUs to solve this, which is faster than the network stack's RPS solution.
It is left as an exercise how to distribute the packets across CPUs. It
would be convenient to use the RX hash, but that is not _yet_ exposed
to XDP programs. For now, users can code their own hash, as I've demonstrated
in the Suricata code (where Q-in-Q is handled correctly).
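A hedged sketch of the double-tag parsing, trimmed down to the VLAN
handling with verifier-friendly bounds checks; struct vlan_hdr is defined
locally (it is not in the UAPI headers) and bpf_htons is assumed to come
from a libbpf-style bpf_endian.h:
#include <linux/bpf.h>
#include <linux/if_ether.h>
#include <bpf/bpf_endian.h>

#ifndef __always_inline
#define __always_inline inline __attribute__((always_inline))
#endif

struct vlan_hdr {
	__be16 h_vlan_TCI;
	__be16 h_vlan_encapsulated_proto;
};

/* Skip up to two VLAN headers (0x8100 or 0x88a8 / Q-in-Q) and return the
 * inner ethertype, or 0 if the packet is too short.
 */
static __always_inline __be16 parse_eth_proto(void *data, void *data_end)
{
	struct ethhdr *eth = data;
	struct vlan_hdr *vlh;
	__be16 proto;
	int i;

	if ((void *)(eth + 1) > data_end)
		return 0;
	proto = eth->h_proto;
	vlh = (void *)(eth + 1);

	#pragma unroll
	for (i = 0; i < 2; i++) {
		if (proto != bpf_htons(ETH_P_8021Q) &&
		    proto != bpf_htons(ETH_P_8021AD))
			break;
		if ((void *)(vlh + 1) > data_end)
			return 0;
		proto = vlh->h_vlan_encapsulated_proto;
		vlh++;
	}
	return proto;
}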
Reported-by: Florian Maury <florian.maury-cv@x-cli.eu>
Reported-by: Marek Majkowski <marek@cloudflare.com>
Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Nikolay Aleksandrov [Fri, 13 Jul 2018 09:16:43 +0000 (12:16 +0300)]
net: ipmr: add support for passing full packet on wrong vif
This patch adds support for IGMPMSG_WRVIFWHOLE which is used to pass
full packet and real vif id when the incoming interface is wrong.
While the RP and FHR are setting up state, we need to be sending the
registers encapsulated with all the data inside, otherwise we lose it.
The RP then decapsulates it and forwards it to the interested parties.
Currently with WRONGVIF we can only be sending empty register packets
and will lose that data.
This behaviour can be enabled by using MRT_PIM with
val == IGMPMSG_WRVIFWHOLE. This doesn't prevent IGMPMSG_WRONGVIF from
happening; it happens in addition to it, and it is controlled by the same
throttling parameters as WRONGVIF (i.e. 1 packet per 3 seconds currently).
Both messages are generated to keep backwards compatibility and to avoid
breaking someone who was enabling MRT_PIM with val == 4, since any
positive val is accepted and treated the same.
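For illustration, a minimal user-space sketch of enabling this on a multicast
routing socket; error handling is trimmed, the example_* name is a placeholder,
and IGMPMSG_WRVIFWHOLE is the new constant (value 4, per the text above):
#include <sys/socket.h>
#include <netinet/in.h>
#include <linux/mroute.h>

int example_enable_wrvifwhole(void)
{
    int fd = socket(AF_INET, SOCK_RAW, IPPROTO_IGMP);
    int val;

    if (fd < 0)
        return -1;

    /* Become the multicast routing daemon for this netns. */
    val = 1;
    if (setsockopt(fd, IPPROTO_IP, MRT_INIT, &val, sizeof(val)) < 0)
        return -1;

    /*
     * Any positive value enables PIM processing; using IGMPMSG_WRVIFWHOLE
     * additionally requests full-packet wrong-vif notifications.
     */
    val = IGMPMSG_WRVIFWHOLE;
    if (setsockopt(fd, IPPROTO_IP, MRT_PIM, &val, sizeof(val)) < 0)
        return -1;

    return fd;
}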
Signed-off-by: Nikolay Aleksandrov <nikolay@cumulusnetworks.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Daniel Borkmann [Fri, 13 Jul 2018 18:26:36 +0000 (20:26 +0200)]
Merge branch 'bpf-xdp-driver-and-hw'
Jakub Kicinski says:
====================
This set is adding support for loading driver and offload XDP
at the same time. This enables advanced use cases where some
of the work is offloaded to the NIC and some is done by the host.
Separate netlink attributes are added for each mode of operation.
Driver callbacks for offload are cleaned up a little, including
removal of .prog_attached flag.
====================
Acked-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Jakub Kicinski [Thu, 12 Jul 2018 03:36:44 +0000 (20:36 -0700)]
nfp: add support for simultaneous driver and hw XDP
Split the handling of offloaded and driver programs completely. Since
offloaded programs always come with XDP_FLAGS_HW_MODE set, in reality
there could be no sharing anyway; programs would only be installed
either in the driver or in hardware. Splitting the handling allows us
to install programs in HW and in the driver at the same time.
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Reviewed-by: Quentin Monnet <quentin.monnet@netronome.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Jakub Kicinski [Thu, 12 Jul 2018 03:36:43 +0000 (20:36 -0700)]
selftests/bpf: add test for multiple programs
Add tests for having an XDP program attached in the driver and
another one attached in HW simultaneously.
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Reviewed-by: Quentin Monnet <quentin.monnet@netronome.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Jakub Kicinski [Thu, 12 Jul 2018 03:36:42 +0000 (20:36 -0700)]
netdevsim: add support for simultaneous driver and hw XDP
Allow netdevsim to accept driver and offload attachment of XDP
BPF programs at the same time.
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Reviewed-by: Quentin Monnet <quentin.monnet@netronome.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Jakub Kicinski [Thu, 12 Jul 2018 03:36:41 +0000 (20:36 -0700)]
xdp: support simultaneous driver and hw XDP attachment
Split the query of HW-attached program from the software one.
Introduce new .ndo_bpf command to query HW-attached program.
This will allow drivers to install different programs in HW
and SW at the same time. Netlink can now also carry multiple
programs on dump (in which case the mode will be set to
XDP_ATTACHED_MULTI and the user has to check the per-attachment-point
attributes; IFLA_XDP_PROG_ID will not be present). We reuse
the IFLA_XDP_PROG_ID skb space for the second mode, so rtnl_xdp_size()
doesn't need to be updated.
Note that the installation side is still not there, since all
drivers currently reject installing more than one program at
a time.
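A minimal driver-side sketch of how the split query might look in an .ndo_bpf
handler, assuming separate driver and hardware program pointers in the
driver's private data; the example_* structure and field names are
illustrative, not taken from a particular driver:
#include <linux/netdevice.h>
#include <linux/bpf.h>

struct example_priv {
    struct bpf_prog *drv_prog;	/* program running in the driver data path */
    struct bpf_prog *hw_prog;	/* program offloaded to the NIC */
};

static int example_ndo_bpf(struct net_device *dev, struct netdev_bpf *bpf)
{
    struct example_priv *priv = netdev_priv(dev);

    switch (bpf->command) {
    case XDP_QUERY_PROG:
        /* Report only the driver-attached program. */
        bpf->prog_id = priv->drv_prog ? priv->drv_prog->aux->id : 0;
        return 0;
    case XDP_QUERY_PROG_HW:
        /* Report only the HW-offloaded program. */
        bpf->prog_id = priv->hw_prog ? priv->hw_prog->aux->id : 0;
        return 0;
    default:
        return -EINVAL;
    }
}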
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Reviewed-by: Quentin Monnet <quentin.monnet@netronome.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Jakub Kicinski [Thu, 12 Jul 2018 03:36:40 +0000 (20:36 -0700)]
xdp: factor out common program/flags handling from drivers
Basic operations drivers perform during xdp setup and query can
be moved to helpers in the core. Encapsulate program and flags
into a structure and add helpers. Note that the structure is
intended as the "main" program information source in the driver.
Most drivers will additionally place the program pointer in their
fast path or ring structures.
The helpers don't have a huge impact now, but they will
decrease the code duplication when programs can be installed
in HW and driver at the same time. Encapsulating the basic
operations in helpers will hopefully also reduce the number
of changes to drivers which adopt them.
The helpers could really be static inline, but they depend on the
definition of struct netdev_bpf, which means they'd have
to be placed in netdevice.h, an already 4500-line header.
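A sketch of the pattern: a small structure recording the installed program and
flags, plus helpers used from the query and setup paths. The example_* names
below illustrate the idea rather than quoting the exact helpers added by the
patch:
#include <linux/netdevice.h>
#include <linux/bpf.h>

/* Per-attachment-point bookkeeping kept in the driver's private data. */
struct example_xdp_attachment_info {
    struct bpf_prog *prog;
    u32 flags;
};

static void example_xdp_attachment_query(struct example_xdp_attachment_info *info,
                                         struct netdev_bpf *bpf)
{
    bpf->prog_id = info->prog ? info->prog->aux->id : 0;
    bpf->prog_flags = info->prog ? info->flags : 0;
}

static void example_xdp_attachment_setup(struct example_xdp_attachment_info *info,
                                         struct netdev_bpf *bpf)
{
    /* Release the previously installed program, if any. */
    if (info->prog)
        bpf_prog_put(info->prog);
    info->prog = bpf->prog;
    info->flags = bpf->flags;
}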
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Reviewed-by: Quentin Monnet <quentin.monnet@netronome.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Jakub Kicinski [Thu, 12 Jul 2018 03:36:39 +0000 (20:36 -0700)]
xdp: don't make drivers report attachment mode
prog_attached of struct netdev_bpf should have been superseded
by simply setting prog_id a long time ago, but we kept it around
to allow offloading drivers to communicate the attachment mode (drv
vs hw). Subsequently drivers were also allowed to report back
attachment flags (prog_flags), and since nowadays only programs
attached with XDP_FLAGS_HW_MODE can get offloaded, we can tell
the attachment mode from the flags the driver reports. Remove the
prog_attached member.
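A simplified core-side sketch of deriving the mode from what the driver
reports once prog_attached is gone; generic (SKB) mode is tracked separately
by the core and ignored here, and the function name is illustrative:
#include <linux/types.h>
#include <linux/if_link.h>

static __u8 example_xdp_attached_mode(__u32 prog_id, __u32 prog_flags)
{
    if (!prog_id)
        return XDP_ATTACHED_NONE;
    /* Only offloaded programs carry XDP_FLAGS_HW_MODE. */
    if (prog_flags & XDP_FLAGS_HW_MODE)
        return XDP_ATTACHED_HW;
    return XDP_ATTACHED_DRV;
}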
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Reviewed-by: Quentin Monnet <quentin.monnet@netronome.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Jakub Kicinski [Thu, 12 Jul 2018 03:36:38 +0000 (20:36 -0700)]
xdp: add per mode attributes for attached programs
In preparation for simultaneous driver and hardware XDP
support, add per-mode attributes. The catch-all IFLA_XDP_PROG_ID
will still be reported, but user space can now also access the
program ID in a new IFLA_XDP_<mode>_PROG_ID attribute.
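A user-space sketch of consuming the dump with libmnl, assuming the parsed
IFLA_XDP nest is available in 'tb'; the concrete IFLA_XDP_DRV_PROG_ID and
IFLA_XDP_HW_PROG_ID names are what the <mode> placeholder suggests and should
be checked against the final uAPI header, and the helper name is illustrative:
#include <stdio.h>
#include <linux/netlink.h>
#include <linux/if_link.h>
#include <libmnl/libmnl.h>

/* 'tb' is the parsed IFLA_XDP nest, indexed by IFLA_XDP_* attribute type. */
static void example_print_xdp_progs(struct nlattr **tb)
{
    __u8 mode = tb[IFLA_XDP_ATTACHED] ?
                mnl_attr_get_u8(tb[IFLA_XDP_ATTACHED]) : XDP_ATTACHED_NONE;

    if (mode != XDP_ATTACHED_MULTI) {
        if (tb[IFLA_XDP_PROG_ID])
            printf("prog id %u\n", mnl_attr_get_u32(tb[IFLA_XDP_PROG_ID]));
        return;
    }

    /* Multiple programs: the catch-all IFLA_XDP_PROG_ID is absent. */
    if (tb[IFLA_XDP_DRV_PROG_ID])
        printf("drv prog id %u\n", mnl_attr_get_u32(tb[IFLA_XDP_DRV_PROG_ID]));
    if (tb[IFLA_XDP_HW_PROG_ID])
        printf("hw prog id %u\n", mnl_attr_get_u32(tb[IFLA_XDP_HW_PROG_ID]));
}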
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Reviewed-by: Quentin Monnet <quentin.monnet@netronome.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Daniel Borkmann [Fri, 13 Jul 2018 13:26:42 +0000 (15:26 +0200)]
Merge branch 'bpf-arm-jit-improvements'
Russell King says:
====================
Four further jit compiler improves for 32-bit ARM.
====================
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Russell King [Thu, 12 Jul 2018 20:50:56 +0000 (21:50 +0100)]
ARM: net: bpf: improve 64-bit ALU implementation
Improve the 64-bit ALU implementation from:
movw r8, #65532
movt r8, #65535
movw r9, #65535
movt r9, #65535
ldr r7, [fp, #-44]
adds r7, r7, r8
str r7, [fp, #-44]
ldr r7, [fp, #-40]
adc r7, r7, r9
str r7, [fp, #-40]
to:
movw r8, #65532
movt r8, #65535
movw r9, #65535
movt r9, #65535
ldrd r6, [fp, #-44]
adds r6, r6, r8
adc r7, r7, r9
strd r6, [fp, #-44]
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Russell King [Thu, 12 Jul 2018 20:50:51 +0000 (21:50 +0100)]
ARM: net: bpf: improve 64-bit store implementation
Improve the 64-bit store implementation from:
ldr r6, [fp, #-8]
str r8, [r6]
ldr r6, [fp, #-8]
mov r7, #4
add r7, r6, r7
str r9, [r7]
to:
ldr r6, [fp, #-8]
str r8, [r6]
str r9, [r6, #4]
We leave the store as two separate STR instructions rather than using
STRD as the store may not be aligned, and STR can handle misalignment.
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Russell King [Thu, 12 Jul 2018 20:50:46 +0000 (21:50 +0100)]
ARM: net: bpf: improve 64-bit sign-extended immediate load
Improve the 64-bit sign-extended immediate from:
mov r6, #1
str r6, [fp, #-52] ; 0xffffffcc
mov r6, #0
str r6, [fp, #-48] ; 0xffffffd0
to:
mov r6, #1
mov r7, #0
strd r6, [fp, #-52] ; 0xffffffcc
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Russell King [Thu, 12 Jul 2018 20:50:41 +0000 (21:50 +0100)]
ARM: net: bpf: improve 64-bit load immediate implementation
Rather than writing each 32-bit half of the 64-bit immediate value
separately when the register is on the stack:
movw r6, #45056 ; 0xb000
movt r6, #60979 ; 0xee33
str r6, [fp, #-44] ; 0xffffffd4
mov r6, #0
str r6, [fp, #-40] ; 0xffffffd8
arrange to use the double-word store when available instead:
movw r6, #45056 ; 0xb000
movt r6, #60979 ; 0xee33
mov r7, #0
strd r6, [fp, #-44] ; 0xffffffd4
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Linus Walleij [Wed, 11 Jul 2018 19:32:45 +0000 (21:32 +0200)]
net: gemini: Indicate that we can handle jumboframes
The hardware supposedly handles frames of up to 10236 bytes, and the
driver implements .ndo_change_mtu(), so accept 10236 minus the
ethernet header of a VLAN-tagged frame as the maximum MTU on the
netdevices. Use ETH_MIN_MTU as the minimum MTU.
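A minimal sketch of setting the MTU bounds at netdevice setup time, under the
assumption that the driver simply programs dev->min_mtu/max_mtu; the 10236
limit comes from the text above and the helper name is illustrative:
#include <linux/netdevice.h>
#include <linux/etherdevice.h>
#include <linux/if_vlan.h>

#define EXAMPLE_HW_MAX_FRAME	10236	/* largest frame the MAC can handle */

static void example_set_mtu_bounds(struct net_device *dev)
{
    dev->min_mtu = ETH_MIN_MTU;
    /* Leave room for the ethernet header of a VLAN-tagged frame. */
    dev->max_mtu = EXAMPLE_HW_MAX_FRAME - VLAN_ETH_HLEN;
}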
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
Linus Walleij [Wed, 11 Jul 2018 19:32:44 +0000 (21:32 +0200)]
net: gemini: Move main init to port
The initialization sequence for the ethernet, setting up
interrupt routing and such things, needs to be done after
both ports are clocked and reset. Before this, the
config will not "take". Move the initialization to the
port probe function and keep track of the init status in
the state.
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Linus Walleij [Wed, 11 Jul 2018 19:32:43 +0000 (21:32 +0200)]
net: gemini: Allow multiple ports to instantiate
The code was not tested with two ports actually in use at
the same time. (I blame this on the lack of actual hardware using
that feature.) Now, after locating a system using both ports,
add the necessary fixes to make both ports come up.
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: David S. Miller <davem@davemloft.net>