Xin Long [Sun, 16 Jul 2023 21:09:19 +0000 (17:09 -0400)]
openvswitch: set IPS_CONFIRMED in tmpl status only when commit is set in conntrack
By not setting IPS_CONFIRMED in the tmpl, which keeps the exp from
being removed from the hashtable on lookup, we can simplify the exp
processing code a lot in openvswitch conntrack.
Signed-off-by: Xin Long <lucien.xin@gmail.com>
Acked-by: Aaron Conole <aconole@redhat.com>
Acked-by: Florian Westphal <fw@strlen.de>
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
Xin Long [Sun, 16 Jul 2023 21:09:18 +0000 (17:09 -0400)]
net: sched: set IPS_CONFIRMED in tmpl status only when commit is set in act_ct
With the following flows, the packets will be dropped if OVS TC offload is
enabled.
'ip,ct_state=-trk,in_port=1 actions=ct(zone=1)'
'ip,ct_state=+trk+new+rel,in_port=1 actions=ct(commit,zone=1)'
'ip,ct_state=+trk+new+rel,in_port=1 actions=ct(commit,zone=2),normal'
In the 1st flow, act_ct finds the exp in the hashtable, removes it and
then creates the ct with this exp. However, for the 2nd flow the packet
goes to the OVS upcall the 1st time. When the skb comes back from
userspace, the ct has to be created again, this time without the exp
(the exp was removed the last time). With no 'rel' set in the ct, the
3rd flow can never be matched.
In OVS conntrack, it works around it by adding its own exp lookup function
ovs_ct_expect_find() where it doesn't remove the exp. Instead of creating
a real ct, it only updates its keys with the exp and its master info. So
when the skb comes back, the exp is still in the hashtable.
However, we can't do this trick in act_ct, as tc flower match is using a
real ct, and passing the exp and its master info to flower parsing via
tc_skb_cb is also not possible (tc_skb_cb size is not big enough).
The simple and clear fix is to not remove the exp for the 1st flow,
namely, to not set IPS_CONFIRMED in the tmpl when commit is not set in
act_ct.
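A minimal sketch of the fix's shape (hedged; the exact patch may
differ), keying the confirmed bit on the commit flag during template
setup:

	/* Hedged sketch, not the verbatim patch: only mark the template
	 * confirmed when ct(commit) is requested, so the exp lookup in
	 * nf_conntrack_in() leaves the exp in the hashtable for the
	 * non-commit 1st flow.
	 */
	if (p->ct_action & TCA_CT_ACT_COMMIT)
		__set_bit(IPS_CONFIRMED_BIT, &tmpl->status);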
Reported-by: Shuang Li <shuali@redhat.com>
Signed-off-by: Xin Long <lucien.xin@gmail.com>
Acked-by: Aaron Conole <aconole@redhat.com>
Reviewed-by: Davide Caratti <dcaratti@redhat.com>
Acked-by: Florian Westphal <fw@strlen.de>
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
Xin Long [Sun, 16 Jul 2023 21:09:17 +0000 (17:09 -0400)]
netfilter: allow exp not to be removed in nf_ct_find_expectation
Currently, nf_conntrack_in() calling nf_ct_find_expectation() will
remove the exp from the hash table. However, in some scenarios we
expect the exp not to be removed when the created ct will not be
confirmed, as in OVS and TC conntrack in the following patches.
This patch keeps the exp in the hash table unless IPS_CONFIRMED is
set in the status of the tmpl.
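A hedged sketch of the resulting lookup-site logic (the parameter
shape is an assumption for illustration):

	/* In init_conntrack(): only unlink the exp from the hashtable
	 * when the tmpl is confirmed, i.e. the created ct will be
	 * confirmed as well.
	 */
	exp = nf_ct_find_expectation(net, zone, tuple,
				     !tmpl || nf_ct_is_confirmed(tmpl));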
Signed-off-by: Xin Long <lucien.xin@gmail.com>
Acked-by: Aaron Conole <aconole@redhat.com>
Acked-by: Florian Westphal <fw@strlen.de>
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
Eric Dumazet [Tue, 18 Jul 2023 16:20:49 +0000 (16:20 +0000)]
tcp: tcp_enter_quickack_mode() should be static
After commit d2ccd7bc8acd ("tcp: avoid resetting ACK timer in DCTCP"),
tcp_enter_quickack_mode() is only used from net/ipv4/tcp_input.c.
Fixes: d2ccd7bc8acd ("tcp: avoid resetting ACK timer in DCTCP")
Signed-off-by: Eric Dumazet <edumazet@google.com>
Cc: Yuchung Cheng <ycheng@google.com>
Cc: Neal Cardwell <ncardwell@google.com>
Link: https://lore.kernel.org/r/20230718162049.1444938-1-edumazet@google.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Eric Dumazet [Tue, 18 Jul 2023 16:16:20 +0000 (16:16 +0000)]
tcp: remove tcp_send_partial()
This function does not exist anymore; remove its stale declaration.
Signed-off-by: Eric Dumazet <edumazet@google.com>
Link: https://lore.kernel.org/r/20230718161620.1391951-1-edumazet@google.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Paolo Abeni [Tue, 18 Jul 2023 14:38:09 +0000 (16:38 +0200)]
udp: use indirect call wrapper for data ready()
In most cases UDP sockets use the default data ready callback.
Leverage the indirect call wrapper for this callback to avoid an
indirect call in the fast path.
The above gives a small but measurable performance gain under UDP
flood.
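A minimal sketch of the pattern (the call site is assumed for
illustration):

	/* When sk->sk_data_ready is still the default sock_def_readable(),
	 * INDIRECT_CALL_1() compiles this into a direct call guarded by a
	 * pointer comparison, avoiding the indirect call in the fast path.
	 */
	INDIRECT_CALL_1(sk->sk_data_ready, sock_def_readable, sk);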
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
Reviewed-by: Willem de Bruijn <willemb@google.com>
Link: https://lore.kernel.org/r/d47d53e6f8ee7a11228ca2f025d6243cc04b77f3.1689691004.git.pabeni@redhat.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Jakub Kicinski [Thu, 20 Jul 2023 03:59:20 +0000 (20:59 -0700)]
Merge branch 'clean-up-the-fec-driver'
Wei Fang says:
====================
clean up the FEC driver
While recently reading the code of the FEC driver, I found some
redundant or invalid code that makes the driver a bit messy and not
concise, so this patch set cleans up the FEC driver. These are only
the issues I have found so far, and I believe they are not all of
them; I will continue to clean up the FEC driver in the future.
====================
Link: https://lore.kernel.org/r/20230718090928.2654347-1-wei.fang@nxp.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Wei Fang [Tue, 18 Jul 2023 09:09:28 +0000 (17:09 +0800)]
net: fec: remove unused members from struct fec_enet_private
Three members of struct fec_enet_private have not been used since they
were first introduced into the FEC driver (commit 6605b730c061 ("FEC:
Add time stamping code and a PTP hardware clock")), namely
last_overflow_check, rx_hwtstamp_filter and base_incval. These unused
members make struct fec_enet_private a bit messy and might confuse
readers. There is no reason to keep them in the FEC driver any longer.
Signed-off-by: Wei Fang <wei.fang@nxp.com>
Link: https://lore.kernel.org/r/20230718090928.2654347-4-wei.fang@nxp.com
Reviewed-by: Simon Horman <simon.horman@corigine.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Wei Fang [Tue, 18 Jul 2023 09:09:27 +0000 (17:09 +0800)]
net: fec: remove fec_set_mac_address() from fec_enet_init()
fec_enet_init() is only invoked when the FEC driver probes, and the
FEC network device has not been brought up at that point. So
fec_set_mac_address() does nothing and just returns zero when it is
invoked in fec_enet_init(). The MAC address is actually set into the
hardware through fec_restart(), which is also called in
fec_enet_init().
Signed-off-by: Wei Fang <wei.fang@nxp.com>
Link: https://lore.kernel.org/r/20230718090928.2654347-3-wei.fang@nxp.com
Reviewed-by: Simon Horman <simon.horman@corigine.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Wei Fang [Tue, 18 Jul 2023 09:09:26 +0000 (17:09 +0800)]
net: fec: remove the remaining code of rx copybreak
Since commit 95698ff6177b ("net: fec: using page pool to manage RX
buffers") was applied, all rx packets, whether small or large, are put
directly into the kernel networking buffers. That is to say, the rx
copybreak function has been gone since then, but the related code was
never completely cleaned up. The purpose of this patch is to remove
the remaining rx copybreak code.
Signed-off-by: Wei Fang <wei.fang@nxp.com>
Link: https://lore.kernel.org/r/20230718090928.2654347-2-wei.fang@nxp.com
Reviewed-by: Simon Horman <simon.horman@corigine.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Eugen Hristev [Tue, 18 Jul 2023 09:09:14 +0000 (12:09 +0300)]
dt-bindings: net: rockchip-dwmac: add default 'input' for clock_in_out
The 'clock_in_out' property is optional, and it can be one of two
enums. The binding does not specify the behavior when the property is
missing altogether.
Hence, add a default value that the driver can use.
Signed-off-by: Eugen Hristev <eugen.hristev@collabora.com>
Reviewed-by: Sebastian Reichel <sebastian.reichel@collabora.com>
Acked-by: Rob Herring <robh@kernel.org>
Link: https://lore.kernel.org/r/20230718090914.282293-1-eugen.hristev@collabora.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Jakub Kicinski [Thu, 20 Jul 2023 03:51:13 +0000 (20:51 -0700)]
Merge branch 'net-stmmac-improve-driver-statistics'
Jisheng Zhang says:
====================
net: stmmac: improve driver statistics
improve the stmmac driver statistics:
1. don't clear network driver statistics in .ndo_close() and
.ndo_open() cycle
2. avoid some network driver statistics overflow on 32 bit platforms
3. use per-queue statistics where necessary to remove frequent
cacheline ping pongs.
====================
Link: https://lore.kernel.org/r/20230717160630.1892-1-jszhang@kernel.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Jisheng Zhang [Mon, 17 Jul 2023 16:06:30 +0000 (00:06 +0800)]
net: stmmac: use per-queue 64 bit statistics where necessary
Currently, there are two major issues with stmmac driver statistics.
First of all, statistics in stmmac_extra_stats, stmmac_rxq_stats
and stmmac_txq_stats are 32-bit variables on 32-bit platforms. This
can cause some stats to overflow after several minutes of
high traffic, for example rx_pkt_n, tx_pkt_n and so on.
Secondly, if the HW supports multiple queues, there are frequent
cacheline ping-pongs on some driver statistic vars, for example
normal_irq_n, tx_pkt_n and so on. What's more, the frequent cacheline
ping-pong on normal_irq_n happens in the ISR; this makes the situation
worse.
To improve the driver, we convert those statistics to 64-bit, implement
ndo_get_stats64 and update the .get_ethtool_stats implementation
accordingly. We also use per-queue statistics where necessary to remove
the cacheline ping-pongs as much as possible, to make multiqueue
operations faster. Statistics which cannot overflow and are not
frequently updated are kept as is.
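A minimal sketch of the per-queue 64-bit counter pattern this
describes (struct and field names assumed for illustration):

	/* u64_stats_sync gives tear-free 64-bit reads on 32-bit
	 * platforms; one instance per queue avoids the cacheline
	 * ping-pong between queues.
	 */
	struct stmmac_txq_stats {
		u64 tx_pkt_n;
		u64 tx_normal_irq_n;
		struct u64_stats_sync syncp;
	} ____cacheline_aligned_in_smp;

	static void txq_stats_inc_pkt(struct stmmac_txq_stats *stats)
	{
		u64_stats_update_begin(&stats->syncp);
		stats->tx_pkt_n++;
		u64_stats_update_end(&stats->syncp);
	}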
Signed-off-by: Jisheng Zhang <jszhang@kernel.org>
Link: https://lore.kernel.org/r/20230717160630.1892-3-jszhang@kernel.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Jisheng Zhang [Mon, 17 Jul 2023 16:06:29 +0000 (00:06 +0800)]
net: stmmac: don't clear network statistics in .ndo_open()
FWICT, the common style in other network drivers is that network
statistics are not cleared after initialization; follow that common
style in stmmac.
Signed-off-by: Jisheng Zhang <jszhang@kernel.org>
Link: https://lore.kernel.org/r/20230717160630.1892-2-jszhang@kernel.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Jakub Kicinski [Thu, 20 Jul 2023 03:36:15 +0000 (20:36 -0700)]
Merge tag 'ipsec-next-2023-07-19' of git://git.kernel.org/pub/scm/linux/kernel/git/klassert/ipsec-next
Steffen Klassert says:
====================
pull request (net-next): ipsec-next 2023-07-19
Just a leftover from the last development cycle:
1) Delete the clearing to zero of encap_oa; it is not needed anymore.
From Leon Romanovsky.
* tag 'ipsec-next-2023-07-19' of git://git.kernel.org/pub/scm/linux/kernel/git/klassert/ipsec-next:
xfrm: delete not-needed clear to zero of encap_oa
====================
Link: https://lore.kernel.org/r/20230719100228.373055-1-steffen.klassert@secunet.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Jakub Kicinski [Wed, 19 Jul 2023 22:02:17 +0000 (15:02 -0700)]
Merge tag 'for-netdev' of https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next
Alexei Starovoitov says:
====================
pull-request: bpf-next 2023-07-19
We've added 45 non-merge commits during the last 3 day(s) which contain
a total of 71 files changed, 7808 insertions(+), 592 deletions(-).
The main changes are:
1) Multi-buffer support in AF_XDP, from Maciej Fijalkowski,
Magnus Karlsson, Tirthendu Sarkar.
2) BPF link support for tc BPF programs, from Daniel Borkmann.
3) Enable bpf_map_sum_elem_count kfunc for all program types,
from Anton Protopopov.
4) Add 'owner' field to bpf_rb_node to fix races in shared ownership,
Dave Marchevsky.
5) Prevent potential skb_header_pointer() misuse, from Alexei Starovoitov.
* tag 'for-netdev' of https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next: (45 commits)
bpf, net: Introduce skb_pointer_if_linear().
bpf: sync tools/ uapi header with
selftests/bpf: Add mprog API tests for BPF tcx links
selftests/bpf: Add mprog API tests for BPF tcx opts
bpftool: Extend net dump with tcx progs
libbpf: Add helper macro to clear opts structs
libbpf: Add link-based API for tcx
libbpf: Add opts-based attach/detach/query API for tcx
bpf: Add fd-based tcx multi-prog infra with link support
bpf: Add generic attach/detach/query API for multi-progs
selftests/xsk: reset NIC settings to default after running test suite
selftests/xsk: add test for too many frags
selftests/xsk: add metadata copy test for multi-buff
selftests/xsk: add invalid descriptor test for multi-buffer
selftests/xsk: add unaligned mode test for multi-buffer
selftests/xsk: add basic multi-buffer test
selftests/xsk: transmit and receive multi-buffer packets
xsk: add multi-buffer documentation
i40e: xsk: add TX multi-buffer support
ice: xsk: Tx multi-buffer support
...
====================
Link: https://lore.kernel.org/r/20230719175424.75717-1-alexei.starovoitov@gmail.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Jakub Kicinski [Wed, 19 Jul 2023 21:31:27 +0000 (14:31 -0700)]
Merge tag 'linux-can-next-for-6.6-20230719' of git://git.kernel.org/pub/scm/linux/kernel/git/mkl/linux-can-next
Marc Kleine-Budde says:
====================
pull-request: can-next 2023-07-19
The first 2 patches are by Judith Mendez; they target the m_can driver
and add hrtimer-based polling support for TI AM62x SoCs, where the
interrupt of the MCU domain's m_can cores is not routed to the Cortex
A53 core.
A patch by Rob Herring converts the grcan driver to use the correct DT
include files.
Michal Simek and Srinivas Neeli add support for optional reset control
to the xilinx_can driver.
The next 2 patches are by Jimmy Assarsson and add support for new
Kvaser pciefd to the kvaser_pciefd driver.
Mao Zhu's patch for the ucan driver removes a repeated word from a
comment.
* tag 'linux-can-next-for-6.6-20230719' of git://git.kernel.org/pub/scm/linux/kernel/git/mkl/linux-can-next:
can: ucan: Remove repeated word
can: kvaser_pciefd: Add support for new Kvaser pciefd devices
can: kvaser_pciefd: Move hardware specific constants and functions into a driver_data struct
can: Explicitly include correct DT includes
can: xilinx_can: Add support for controller reset
dt-bindings: can: xilinx_can: Add reset description
can: m_can: Add hrtimer to generate software interrupt
dt-bindings: net: can: Remove interrupt properties for MCAN
====================
Link: https://lore.kernel.org/r/20230719072348.525039-1-mkl@pengutronix.de
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Alexei Starovoitov [Tue, 18 Jul 2023 23:40:21 +0000 (16:40 -0700)]
bpf, net: Introduce skb_pointer_if_linear().
Network drivers always call skb_header_pointer() with a non-null
buffer. Remove the !buffer check to prevent accidental misuse of
skb_header_pointer(), and introduce skb_pointer_if_linear() instead.
Reported-by: Jakub Kicinski <kuba@kernel.org>
Acked-by: Jakub Kicinski <kuba@kernel.org>
Link: https://lore.kernel.org/r/20230718234021.43640-1-alexei.starovoitov@gmail.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Alan Maguire [Wed, 19 Jul 2023 16:22:57 +0000 (17:22 +0100)]
bpf: sync tools/ uapi header with
Seeing the following:
Warning: Kernel ABI header at 'tools/include/uapi/linux/bpf.h' differs from latest version at 'include/uapi/linux/bpf.h'
...so sync the tools version, which is missing some list_node/rb_tree
fields.
Fixes: c3c510ce431c ("bpf: Add 'owner' field to bpf_{list,rb}_node")
Signed-off-by: Alan Maguire <alan.maguire@oracle.com>
Link: https://lore.kernel.org/r/20230719162257.20818-1-alan.maguire@oracle.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Alexei Starovoitov [Wed, 19 Jul 2023 17:07:28 +0000 (10:07 -0700)]
Merge branch 'bpf-link-support-for-tc-bpf-programs'
Daniel Borkmann says:
====================
BPF link support for tc BPF programs
This series adds BPF link support for tc BPF programs. We initially
presented the motivation, related work and design at last year's LPC
conference in the networking & BPF track [0], and a recent update on
our progress of the rework during this year's LSF/MM/BPF summit [1].
The main changes are in first two patches and the last two have an
extensive batch of test cases we developed along with it, please see
individual patches for details. We tested this series with tc-testing
selftest suite as well as BPF CI/selftests. Thanks!
v5 -> v6:
- Remove export symbol on tcx_inc/dec (Jakub)
- Treat fd==0 as invalid (Stan, Alexei)
v4 -> v5:
- Updated bpftool docs and usage of bpftool net (Quentin)
- Consistent dump "prog id"/"link id" -> "prog_id"/"link_id" (Quentin)
- Reworked bpftool flag output handling (Quentin)
- LIBBPF_OPTS_RESET() macro with varargs for reinit (Andrii)
- libbpf opts/link bail out on relative_fd && relative_id (Andrii)
- libbpf improvements for assigning attr.relative_{id,fd} (Andrii)
- libbpf sorting in libbpf.map (Andrii)
- libbpf move ifindex to bpf_program__attach_tcx param (Andrii)
- libbpf move BPF_F_ID flag handling to bpf_link_create (Andrii)
- bpf_program_attach_fd with tcx instead of tc (Andrii)
- Reworking kernel-internal bpf_mprog API (Alexei, Andrii)
- Change "object" notation to "id_or_fd" (Andrii)
- Remove on stack cpp[BPF_MPROG_MAX] and switch to memmove (Andrii)
- Simplify bpf_mprog_{insert,delete} and add comment on internals
- Get rid of BPF_MPROG_* return codes (Alexei, Andrii)
v3 -> v4:
- Fix bpftool output to display tcx/{ingress,egress} (Stan)
- Documentation around API, BPF_MPROG_* return codes and locking
expectations (Stan, Alexei)
- Change _after and _before to have the same semantics for return
value (Alexei)
- Rework mprog initialization and move allocation/free one layer
up into tcx to simplify the code (Stan)
- Add comment on synchronize_rcu and parent->ref (Stan)
- Add comment on bpf_mprog_pos_() helpers wrt target position (Stan)
v2 -> v3:
- Removal of BPF_F_FIRST/BPF_F_LAST from control UAPI (Toke, Stan)
- Along with that full rework of bpf_mprog internals to simplify
dependency management, looks much nicer now imho
- Just single bpf_mprog_cp instead of two (Andrii)
- atomic64_t for revision counter (Andrii)
- Evaluate target position and reject on conflicts (Andrii)
- Keep track of actual count in bpf_mprog_bundle (Andrii)
- Make combo of REPLACE and BEFORE/AFTER work (Andrii)
- Moved miniq as first struct member (Jamal)
- Rework tcx_link_attach with regards to rtnl (Jakub, Andrii)
- Moved wrappers after bpf_prog_detach_ops (Andrii)
- Removed union for relative_fd and friends for opts and link in
libbpf (Andrii)
- Add doc comments to attach/detach/query libbpf APIs (Andrii)
- Dropped SEC_ATTACHABLE_OPT (Andrii)
- Add an OPTS_ZEROED check to bpf_link_create (Andrii)
- Keep opts as the last argument in bpf_program_attach_fd (Andrii)
- Rework bpf_program_attach_fd (Andrii)
- Remove OPTS_GET before we checked OPTS_VALID in
bpf_program__attach_tcx (Andrii)
- Add `size_t :0;` to prevent compiler from leaving garbage (Andrii)
- Add helper macro to clear opts structs which I found useful
when writing tests
- Rework of both opts and link test cases to accommodate for changes
v1 -> v2:
- Rework of almost entire series to remove prio from UAPI and switch
to better control directives BPF_F_FIRST/BPF_F_LAST/BPF_F_BEFORE/
BPF_F_AFTER (Alexei, Toke, Stan, Andrii)
- Addition of big test suite to cover all corner cases
[0] https://lpc.events/event/16/contributions/1353/
[1] http://vger.kernel.org/bpfconf2023_material/tcx_meta_netdev_borkmann.pdf
====================
Link: https://lore.kernel.org/r/20230719140858.13224-1-daniel@iogearbox.net
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Daniel Borkmann [Wed, 19 Jul 2023 14:08:58 +0000 (16:08 +0200)]
selftests/bpf: Add mprog API tests for BPF tcx links
Add a big batch of test coverage to assert all aspects of the tcx link API:
# ./vmtest.sh -- ./test_progs -t tc_links
[...]
#225 tc_links_after:OK
#226 tc_links_append:OK
#227 tc_links_basic:OK
#228 tc_links_before:OK
#229 tc_links_chain_classic:OK
#230 tc_links_dev_cleanup:OK
#231 tc_links_invalid:OK
#232 tc_links_prepend:OK
#233 tc_links_replace:OK
#234 tc_links_revision:OK
Summary: 10/0 PASSED, 0 SKIPPED, 0 FAILED
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/r/20230719140858.13224-9-daniel@iogearbox.net
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Daniel Borkmann [Wed, 19 Jul 2023 14:08:57 +0000 (16:08 +0200)]
selftests/bpf: Add mprog API tests for BPF tcx opts
Add a big batch of test coverage to assert all aspects of the tcx opts
attach, detach and query API:
# ./vmtest.sh -- ./test_progs -t tc_opts
[...]
#238 tc_opts_after:OK
#239 tc_opts_append:OK
#240 tc_opts_basic:OK
#241 tc_opts_before:OK
#242 tc_opts_chain_classic:OK
#243 tc_opts_demixed:OK
#244 tc_opts_detach:OK
#245 tc_opts_detach_after:OK
#246 tc_opts_detach_before:OK
#247 tc_opts_dev_cleanup:OK
#248 tc_opts_invalid:OK
#249 tc_opts_mixed:OK
#250 tc_opts_prepend:OK
#251 tc_opts_replace:OK
#252 tc_opts_revision:OK
Summary: 15/0 PASSED, 0 SKIPPED, 0 FAILED
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/r/20230719140858.13224-8-daniel@iogearbox.net
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Daniel Borkmann [Wed, 19 Jul 2023 14:08:56 +0000 (16:08 +0200)]
bpftool: Extend net dump with tcx progs
Add support to dump fd-based attach types via bpftool. This includes both
the tc BPF link and attach ops programs. Dumped information contains
the attach location, function entry name, program ID and link ID when
applicable.
Example with tc BPF link:
# ./bpftool net
xdp:
tc:
bond0(4) tcx/ingress cil_from_netdev prog_id 784 link_id 10
bond0(4) tcx/egress cil_to_netdev prog_id 804 link_id 11
flow_dissector:
netfilter:
Example with tc BPF attach ops:
# ./bpftool net
xdp:
tc:
bond0(4) tcx/ingress cil_from_netdev prog_id 654
bond0(4) tcx/egress cil_to_netdev prog_id 672
flow_dissector:
netfilter:
Currently, permanent flags are not yet supported, so 'unknown' ones
are dumped via NET_DUMP_UINT_ONLY(); once we do have permanent ones,
we will dump them as a human-readable string.
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Reviewed-by: Quentin Monnet <quentin@isovalent.com>
Link: https://lore.kernel.org/r/20230719140858.13224-7-daniel@iogearbox.net
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Daniel Borkmann [Wed, 19 Jul 2023 14:08:55 +0000 (16:08 +0200)]
libbpf: Add helper macro to clear opts structs
Add a small and generic LIBBPF_OPTS_RESET() helper macro which clears
an opts structure and reinitializes its .sz member to the structure
size. Additionally, the user can pass option-specific data to
reinitialize via varargs.
I found this very useful when developing selftests, but it is also generic
enough as a macro next to the existing LIBBPF_OPTS() which hides the .sz
initialization, too.
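A hedged usage sketch (the opts fields in the varargs form are
illustrative):

	LIBBPF_OPTS(bpf_prog_attach_opts, opta);

	/* ... first attach using opta ... */

	/* Clear back to { .sz = sizeof(opta) }: */
	LIBBPF_OPTS_RESET(opta);

	/* Or clear and reinitialize specific members in one go: */
	LIBBPF_OPTS_RESET(opta, .flags = BPF_F_BEFORE, .relative_fd = prog_fd1);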
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/r/20230719140858.13224-6-daniel@iogearbox.net
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Daniel Borkmann [Wed, 19 Jul 2023 14:08:54 +0000 (16:08 +0200)]
libbpf: Add link-based API for tcx
Implement tcx BPF link support for libbpf.
The bpf_program__attach_fd() API has been refactored slightly in order to pass
bpf_link_create_opts pointer as input.
A new bpf_program__attach_tcx() has been added on top of this which allows for
passing all relevant data via extensible struct bpf_tcx_opts.
The program sections tcx/ingress and tcx/egress correspond to the hook locations
for tc ingress and egress, respectively.
For concrete usage examples, see the extensive selftests that have been
developed as part of this series.
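A hedged usage sketch under the API described above (path and error
handling are illustrative):

	LIBBPF_OPTS(bpf_tcx_opts, optl);
	struct bpf_link *link;

	/* Attach a tcx/ingress program to the given ifindex and pin
	 * the link so it outlives the loader process.
	 */
	link = bpf_program__attach_tcx(prog, ifindex, &optl);
	if (!link)
		return -errno;
	bpf_link__pin(link, "/sys/fs/bpf/tcx_ingress_link");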
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/r/20230719140858.13224-5-daniel@iogearbox.net
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Daniel Borkmann [Wed, 19 Jul 2023 14:08:53 +0000 (16:08 +0200)]
libbpf: Add opts-based attach/detach/query API for tcx
Extend libbpf attach opts and add a new detach opts API so this can be used
to add/remove fd-based tcx BPF programs. The old-style bpf_prog_detach() and
bpf_prog_detach2() APIs are refactored to reuse the new bpf_prog_detach_opts()
internally.
The bpf_prog_query_opts() API got extended to be able to handle the new
link_ids, link_attach_flags and revision fields.
For concrete usage examples, see the extensive selftests that have been
developed as part of this series.
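A hedged sketch of how the extended opts could be used (flag and field
names as described in this series):

	LIBBPF_OPTS(bpf_prog_attach_opts, opta,
		.flags = BPF_F_BEFORE,
		.relative_fd = prog_fd1,
	);
	LIBBPF_OPTS(bpf_prog_detach_opts, optd);
	int err;

	/* Attach prog_fd2 in front of the already attached prog_fd1 ... */
	err = bpf_prog_attach_opts(prog_fd2, ifindex, BPF_TCX_INGRESS, &opta);
	/* ... and detach it again via the new detach opts API. */
	if (!err)
		err = bpf_prog_detach_opts(prog_fd2, ifindex, BPF_TCX_INGRESS, &optd);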
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/r/20230719140858.13224-4-daniel@iogearbox.net
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Daniel Borkmann [Wed, 19 Jul 2023 14:08:52 +0000 (16:08 +0200)]
bpf: Add fd-based tcx multi-prog infra with link support
This work refactors and adds a lightweight extension ("tcx") to the tc BPF
ingress and egress data path side for allowing BPF program management based
on fds via bpf() syscall through the newly added generic multi-prog API.
The main goal behind this work, which we also presented at LPC [0] last
year with a recent update on our progress at LSF/MM/BPF this year [3],
is to support the long-awaited BPF link functionality for tc BPF
programs, which allows for a model of safe ownership and program
detachment.
Given the rise of tc BPF users in cloud native environments, this
becomes necessary to avoid hard-to-debug incidents caused either by
stale leftover programs or by 3rd party applications accidentally
stepping on each other's toes.
As a recap, a BPF link represents the attachment of a BPF program to
a BPF hook point. The BPF link holds a single reference to keep the
BPF program alive. Moreover, hook points do not reference a BPF link;
only the application's fd or pinning does. A BPF link holds meta-data
specific to the attachment and implements operations for link
creation, (atomic) BPF program update, detachment and introspection.
The motivation for BPF links for tc BPF programs is multi-fold, for
example:
- From Meta: "It's especially important for applications that are deployed
fleet-wide and that don't "control" hosts they are deployed to. If such
application crashes and no one notices and does anything about that, BPF
program will keep running draining resources or even just, say, dropping
packets. We at FB had outages due to such permanent BPF attachment
semantics. With fd-based BPF link we are getting a framework, which allows
safe, auto-detachable behavior by default, unless application explicitly
opts in by pinning the BPF link." [1]
- From the Cilium side, the tc BPF programs we attach to host-facing
veth devices and phys devices build the core datapath for Kubernetes
Pods, and they implement forwarding, load-balancing, policy,
EDT-management, etc, within BPF. Currently there is no concept of
'safe' ownership, e.g. we've recently experienced hard-to-debug issues
in a user's staging environment where another Kubernetes application
using tc BPF, attached to the same prio/handle of cls_bpf, accidentally
wiped all Cilium-based BPF programs from underneath it. The goal is to
establish a clear/safe ownership model via links which cannot
accidentally be overridden. [0,2]
BPF links for tc can co-exist with non-link attachments, and the semantics are
in line also with XDP links: BPF links cannot replace other BPF links, BPF
links cannot replace non-BPF links, non-BPF links cannot replace BPF links and
lastly only non-BPF links can replace non-BPF links. In case of Cilium, this
would solve mentioned issue of safe ownership model as 3rd party applications
would not be able to accidentally wipe Cilium programs, even if they are not
BPF link aware.
Earlier attempts [4] tried to integrate BPF links into the core tc
machinery to solve cls_bpf, which was intrusive to the generic tc
kernel API, with extensions only specific to cls_bpf, and
suboptimal/complex since cls_bpf could be wiped from the qdisc as well.
Locking a tc BPF program in place this way gets into layering hacks,
given the two object models are vastly different.
We instead implemented the tcx (tc 'express') layer which is an fd-based tc BPF
attach API, so that the BPF link implementation blends in naturally similar to
other link types which are fd-based and without the need for changing core tc
internal APIs. BPF programs for tc can then be successively migrated
from classic cls_bpf to the new tc BPF link without needing to change
the program's source code; adapting the BPF loader mechanics for
attaching is sufficient.
For the current tc framework, there is no change in behavior with this change
and neither does this change touch on tc core kernel APIs. The gist of this
patch is that the ingress and egress hook have a lightweight, qdisc-less
extension for BPF to attach its tc BPF programs, in other words, a minimal
entry point for tc BPF. The name tcx has been suggested from discussion of
earlier revisions of this work as a good fit, and to more easily differ between
the classic cls_bpf attachment and the fd-based one.
For the ingress and egress tcx points, the device holds a cache-friendly array
with program pointers which is separated from control plane (slow-path) data.
Earlier versions of this work used priority to determine ordering and
expression of dependencies, similar to classic tc, but it was
challenged that for something more future-proof a better user
experience is required. Hence this
resulted in the design and development of the generic attach/detach/query API
for multi-progs. See prior patch with its discussion on the API design. tcx is
the first user and later we plan to integrate also others, for example, one
candidate is multi-prog support for XDP which would benefit and have the same
'look and feel' from API perspective.
The goal with tcx is maximum compatibility with existing tc BPF
programs, so they don't need to be rewritten specifically.
Compatibility to call into classic tcf_classify() is also provided, in
order to allow successive migration or for both to cleanly co-exist
where needed, given it's all one logical tc layer and the tcx plus
classic tc cls/act build one logical overall processing pipeline.
tcx supports the simplified return code TCX_NEXT, which is
non-terminating (go to the next program), and the terminating codes
TCX_PASS, TCX_DROP and TCX_REDIRECT.
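For illustration, a minimal tcx program using these return codes
(section name per the libbpf patch in this series; assumes the uapi
additions from this series):

	#include <linux/bpf.h>
	#include <bpf/bpf_helpers.h>

	SEC("tcx/ingress")
	int tcx_example(struct __sk_buff *skb)
	{
		/* Non-terminating: continue with the next program, if any. */
		return TCX_NEXT;
	}

	char LICENSE[] SEC("license") = "GPL";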
The fd-based API is behind a static key, so that when unused the code is also
not entered. The struct tcx_entry's program array is currently static, but
could be made dynamic if necessary at a point in future. The a/b pair swap
design has been chosen so that for detachment there are no allocations which
otherwise could fail.
The work has been tested with tc-testing selftest suite which all passes, as
well as the tc BPF tests from the BPF CI, and also with Cilium's L4LB.
Thanks also to Nikolay Aleksandrov and Martin Lau for in-depth early reviews
of this work.
[0] https://lpc.events/event/16/contributions/1353/
[1] https://lore.kernel.org/bpf/CAEf4BzbokCJN33Nw_kg82sO=xppXnKWEncGTWCTB9vGCmLB6pw@mail.gmail.com
[2] https://colocatedeventseu2023.sched.com/event/1Jo6O/tales-from-an-ebpf-programs-murder-mystery-hemanth-malla-guillaume-fournier-datadog
[3] http://vger.kernel.org/bpfconf2023_material/tcx_meta_netdev_borkmann.pdf
[4] https://lore.kernel.org/bpf/20210604063116.234316-1-memxor@gmail.com
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Jakub Kicinski <kuba@kernel.org>
Link: https://lore.kernel.org/r/20230719140858.13224-3-daniel@iogearbox.net
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Daniel Borkmann [Wed, 19 Jul 2023 14:08:51 +0000 (16:08 +0200)]
bpf: Add generic attach/detach/query API for multi-progs
This adds a generic layer called bpf_mprog which can be reused by different
attachment layers to enable multi-program attachment and dependency resolution.
In-kernel users of bpf_mprog don't need to care about the dependency
resolution internals; they can just consume it with a few API calls.
The initial idea of having a generic API sparked out of discussion [0] from an
earlier revision of this work where tc's priority was reused and exposed via
BPF uapi as a way to coordinate dependencies among tc BPF programs, similar
as-is for classic tc BPF. The feedback was that priority provides a bad user
experience and is hard to use [1], e.g.:
I cannot help but feel that priority logic copy-paste from old tc, netfilter
and friends is done because "that's how things were done in the past". [...]
Priority gets exposed everywhere in uapi all the way to bpftool when it's
right there for users to understand. And that's the main problem with it.
The user don't want to and don't need to be aware of it, but uapi forces them
to pick the priority. [...] Your cover letter [0] example proves that in
real life different service pick the same priority. They simply don't know
any better. Priority is an unnecessary magic that apps _have_ to pick, so
they just copy-paste and everyone ends up using the same.
The course of the discussion showed more and more the need for a generic,
reusable API where the "same look and feel" can be applied for various other
program types beyond just tc BPF: for example, XDP today does not have
multi-program support in the kernel, and there was also interest around
this API for improving management of cgroup program types. Such a
common multi-program
management concept is useful for BPF management daemons or user space BPF
applications coordinating internally about their attachments.
Both from Cilium and Meta side [2], we've collected the following requirements
for a generic attach/detach/query API for multi-progs which has been implemented
as part of this work:
- Support prog-based attach/detach and link API
- Dependency directives (can also be combined):
- BPF_F_{BEFORE,AFTER} with relative_{fd,id} which can be {prog,link,none}
- BPF_F_ID flag as {fd,id} toggle; the rationale for id is so that user
space application does not need CAP_SYS_ADMIN to retrieve foreign fds
via bpf_*_get_fd_by_id()
- BPF_F_LINK flag as {prog,link} toggle
- If relative_{fd,id} is none, then BPF_F_BEFORE will just prepend, and
BPF_F_AFTER will just append for attaching
- Enforced only at attach time
- BPF_F_REPLACE with replace_bpf_fd which can be prog, links have their
own infra for replacing their internal prog
- If no flags are set, then it's default append behavior for attaching
- Internal revision counter and optionally being able to pass expected_revision
- User space application can query current state with revision, and pass it
along for attachment to assert current state before doing updates
- Query also gets extension for link_ids array and link_attach_flags:
- prog_ids are always filled with program IDs
- link_ids are filled with link IDs when link was used, otherwise 0
- {prog,link}_attach_flags for holding {prog,link}-specific flags
- Must be easy to integrate/reuse for in-kernel users
The uapi-side changes needed for supporting bpf_mprog are rather minimal,
consisting of the additions of the attachment flags, revision counter, and
expanding existing union with relative_{fd,id} member.
The bpf_mprog framework consists of a bpf_mprog_entry object which holds
an array of bpf_mprog_fp (fast-path structure). The bpf_mprog_cp (control-path
structure) is part of bpf_mprog_bundle. Both have been separated, so that
fast-path gets efficient packing of bpf_prog pointers for maximum cache
efficiency. Also, array has been chosen instead of linked list or other
structures to remove unnecessary indirections for a fast point-to-entry in
tc for BPF.
The bpf_mprog_entry comes as a pair via bpf_mprog_bundle so that in case of
updates the peer bpf_mprog_entry is populated and then just swapped which
avoids additional allocations that could otherwise fail, for example, in
detach case. bpf_mprog_{fp,cp} arrays are currently static, but they could
be converted to dynamic allocation if necessary at a point in future.
Locking is deferred to the in-kernel user of bpf_mprog, for example, in case
of tcx which uses this API in the next patch, it piggybacks on rtnl.
An extensive test suite for checking all aspects of this API for prog-based
attach/detach and link API comes as BPF selftests in this series.
Thanks also to Andrii Nakryiko for early API discussions wrt Meta's BPF prog
management.
[0] https://lore.kernel.org/bpf/20221004231143.19190-1-daniel@iogearbox.net
[1] https://lore.kernel.org/bpf/CAADnVQ+gEY3FjCR=+DmjDR4gp5bOYZUFJQXj4agKFHT9CQPZBw@mail.gmail.com
[2] http://vger.kernel.org/bpfconf2023_material/tcx_meta_netdev_borkmann.pdf
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/r/20230719140858.13224-2-daniel@iogearbox.net
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Alexei Starovoitov [Wed, 19 Jul 2023 16:56:51 +0000 (09:56 -0700)]
Merge branch 'xsk-multi-buffer-support'
Maciej Fijalkowski says:
====================
xsk: multi-buffer support
v6->v7:
- rebase...[Alexei]
v5->v6:
- update bpf_xdp_query_opts__last_field in patch 10 [Alexei]
v4->v5:
- align options argument size to match options from xdp_desc [Benjamin]
- cleanup skb from xdp_sock on socket termination [Toke]
- introduce new netlink attribute for letting user space know about Tx
frag limit; this substitutes xdp_features flag previously dedicated
for setting ZC multi-buffer support [Toke, Jakub]
- include i40e ZC multi-buffer support
- enable TOO_MANY_FRAGS for ZC on xskxceiver; this is now possible due
to netlink attribute mentioned two bullets above
v3->v4:
- rely on ynl for adding new xdp_features flag [Jakub]
- move xskb_list to xsk_buff_pool
v2->v3:
- Fix issue with the next valid packet getting dropped after an invalid
packet with MAX_SKB_FRAGS + 1 frags [Magnus]
- query NETDEV_XDP_ACT_ZC_SG flag within xskxceiver and act on it
- remove redundant include in xsk.c [kernel test robot]
- s/NETDEV_XDP_ACT_NDO_ZC_SG/NETDEV_XDP_ACT_ZC_SG + kernel doc [Magnus,
Simon]
v1->v2:
- fix spelling issues in commit messages [Simon]
- remove XSK_DESC_MAX_FRAGS, use MAX_SKB_FRAGS instead [Stan, Alexei]
- add documentation patch
- fix build error from kernel test robot on patch 10
This series of patches adds multi-buffer support for AF_XDP. XDP and
various NIC drivers already have support for multi-buffer packets. With
this patch set, programs using AF_XDP sockets can now also receive and
transmit multi-buffer packets both in copy as well as zero-copy mode.
ZC multi-buffer implementation is based on ice driver.
Some definitions to put us all on the same page:
* A packet consists of one or more frames
* A descriptor in one of the AF_XDP rings always refers to a single
frame. In the case the packet consists of a single frame, the
descriptor refers to the whole packet.
To represent a packet consisting of multiple frames, we introduce a
new flag called XDP_PKT_CONTD in the options field of the Rx and Tx
descriptors. If it is true (1) the packet continues with the next
descriptor and if it is false (0) it means this is the last descriptor
of the packet. Why the reverse logic of end-of-packet (eop) flag found
in many NICs? Just to preserve compatibility with non-multi-buffer
applications that have this bit set to false for all packets on Rx, and
the apps set the options field to zero for Tx, as anything else will
be treated as an invalid descriptor.
These are the semantics for producing packets onto XSK Tx ring
consisting of multiple frames:
* When an invalid descriptor is found, all the other
descriptors/frames of this packet are marked as invalid and not
completed. The next descriptor is treated as the start of a new
packet, even if this was not the intent (because we cannot guess
the intent). As before, if your program is producing invalid
descriptors you have a bug that must be fixed.
* Zero length descriptors are treated as invalid descriptors.
* For copy mode, the maximum supported number of frames in a packet is
equal to CONFIG_MAX_SKB_FRAGS + 1. If it is exceeded, all
descriptors accumulated so far are dropped and treated as
invalid. To produce an application that will work on any system
regardless of this config setting, limit the number of frags to 18,
as the minimum value of the config is 17.
* For zero-copy mode, the limit is up to what the NIC HW
supports. User space can discover this via newly introduced
NETDEV_A_DEV_XDP_ZC_MAX_SEGS netlink attribute.
Here is an example of Tx path pseudo-code (using libxdp interfaces for
simplicity), ignoring that the umem is finite in size and that we will
eventually run out of packets to send. It also assumes pkts.addr points
to a valid location in the umem.
void tx_packets(struct xsk_socket_info *xsk, struct pkt *pkts,
		int batch_size)
{
	u32 idx, i, pkt_nb = 0;

	xsk_ring_prod__reserve(&xsk->tx, batch_size, &idx);

	for (i = 0; i < batch_size;) {
		u64 addr = pkts[pkt_nb].addr;
		u32 len = pkts[pkt_nb].size;

		do {
			struct xdp_desc *tx_desc;

			tx_desc = xsk_ring_prod__tx_desc(&xsk->tx, idx + i++);
			tx_desc->addr = addr;

			if (len > xsk_frame_size) {
				tx_desc->len = xsk_frame_size;
				tx_desc->options |= XDP_PKT_CONTD;
			} else {
				tx_desc->len = len;
				tx_desc->options = 0;
				pkt_nb++;
			}
			len -= tx_desc->len;
			addr += xsk_frame_size;

			if (i == batch_size) {
				/* Remember len, addr, pkt_nb for next
				 * iteration. Skipped for simplicity.
				 */
				break;
			}
		} while (len);
	}

	xsk_ring_prod__submit(&xsk->tx, i);
}
On the Rx path in copy mode, the xsk core copies the XDP data into
multiple descriptors, if needed, and sets the XDP_PKT_CONTD flag as
detailed before. Zero-copy mode, in order to avoid the copies, has to
maintain a chain of xdp_buff_xsk structs that represent a whole packet.
This is because what actually gets redirected is the xdp_buff, and we
currently have no equivalent of the mechanism used in copy mode (the
embedded skb_shared_info in xdp_buff) to carry the frags. This means
xdp_buff_xsk grows in size, but these members are at the end and should
not be touched when the data path is not dealing with fragmented
packets. This solution kept us within the assumed performance impact,
hence we decided to proceed with it.
When the application gets a descriptor with the
XDP_PKT_CONTD flag set to one, it means that the packet consists of
multiple buffers and it continues with the next buffer in the following
descriptor. When a descriptor with XDP_PKT_CONTD == 0 is received, it
means that this is the last buffer of the packet. AF_XDP guarantees that
only a complete packet (all frames in the packet) is sent to the
application.
If the application reads a batch of descriptors, using for example the
libxdp interfaces, it is not guaranteed that the batch will end with a
full packet. It might end in the middle of a packet, and the rest of
the buffers of that packet will arrive at the beginning of the next
batch, since the libxdp interface does not read the whole ring (unless
you have an enormous batch size or a very small ring size).
Here is a simple Rx path pseudo-code example (using libxdp interfaces for
simplicity). Error paths have been excluded for simplicity:
void rx_packets(struct xsk_socket_info *xsk)
{
	static bool new_packet = true;
	u32 idx_rx = 0, idx_fq = 0;
	static char *pkt;

	int rcvd = xsk_ring_cons__peek(&xsk->rx, opt_batch_size, &idx_rx);

	xsk_ring_prod__reserve(&xsk->umem->fq, rcvd, &idx_fq);

	for (int i = 0; i < rcvd; i++) {
		struct xdp_desc *desc = xsk_ring_cons__rx_desc(&xsk->rx, idx_rx++);
		char *frag = xsk_umem__get_data(xsk->umem->buffer, desc->addr);
		bool eop = !(desc->options & XDP_PKT_CONTD);

		if (new_packet)
			pkt = frag;
		else
			add_frag_to_pkt(pkt, frag);

		if (eop)
			process_pkt(pkt);

		new_packet = eop;

		*xsk_ring_prod__fill_addr(&xsk->umem->fq, idx_fq++) = desc->addr;
	}

	xsk_ring_prod__submit(&xsk->umem->fq, rcvd);
	xsk_ring_cons__release(&xsk->rx, rcvd);
}
We had to introduce a new bind flag (XDP_USE_SG) at the AF_XDP level to
enable multi-buffer support. The reason we need to differentiate
between non-multi-buffer and multi-buffer is the behavior when the
kernel gets a packet that is larger than the frame size. Without
multi-buffer, this packet is dropped and marked in the stats. With
multi-buffer on, we want to split it up into multiple frames instead.
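A hedged sketch of opting in from user space (libxdp config struct
assumed):

	struct xsk_socket_config cfg = {
		.rx_size = XSK_RING_CONS__DEFAULT_NUM_DESCS,
		.tx_size = XSK_RING_PROD__DEFAULT_NUM_DESCS,
		/* Opt in to multi-buffer; without XDP_USE_SG, oversized
		 * packets are dropped instead of split into frags.
		 */
		.bind_flags = XDP_USE_NEED_WAKEUP | XDP_USE_SG,
	};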
At the start, we thought that riding on the .frags section name of
the XDP program was a good idea. You do not have to introduce yet
another flag and all AF_XDP users must load an XDP program anyway
to get any traffic up to the socket, so why not just say that the XDP
program decides if the AF_XDP socket should get multi-buffer packets
or not? The problem is that we can create an AF_XDP socket that is Tx
only and that works without having to load an XDP program at
all. Another problem is that the XDP program might change during the
execution, so we would have to check this for every single packet.
Here is the observed throughput compared to a codebase without any
multi-buffer changes, measured with xdpsock for 64B packets.
Apparently ZC Tx takes a hit from the explicit zero-length descriptor
validation. Overall, in terms of ZC performance there is room for
improvement, but for now we think this work is in good shape in terms
of correctness and functionality. We were targeting up to 5% overhead,
though. Note that the ZC performance drops come from core + driver
support combined, whereas copy mode already had driver support in
place.
Mode     rxdrop  l2fwd  txonly
ice-zc   -4%     -7%    -6%
i40e-zc  -7%     -6%    -7%
drv      -1.2%    0%    +2%
skb      -0.6%   -1%    +2%
Thank you,
Tirthendu, Magnus and Maciej
====================
Link: https://lore.kernel.org/r/20230719132421.584801-1-maciej.fijalkowski@intel.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Maciej Fijalkowski [Wed, 19 Jul 2023 13:24:21 +0000 (15:24 +0200)]
selftests/xsk: reset NIC settings to default after running test suite
Currently, when running ZC test suite, after finishing first run of test
suite and then switching to busy-poll tests within xskxceiver, such
errors are observed:
libbpf: Kernel error message: ice: MTU is too large for linear frames and XDP prog does not support frags
1..26
libbpf: Kernel error message: Native and generic XDP can't be active at the same time
Error attaching XDP program
not ok 1 [xskxceiver.c:xsk_reattach_xdp:1568]: ERROR: 17/"File exists"
This is because the test suite ends with a 9k MTU and the native XDP
program loaded, while the busy-poll tests start with non-multi-buffer
tests in generic mode.
To fix this, let us introduce a bash function that resets the NIC
settings to their defaults (e.g. 1500 MTU and no XDP progs loaded) so
that the test suite can continue without interruption. It also means
that after the busy-poll tests the NIC will have those default
settings, whereas right now it is left with a 9k MTU and an XDP prog
loaded in native mode.
Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
Link: https://lore.kernel.org/r/20230719132421.584801-25-maciej.fijalkowski@intel.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Magnus Karlsson [Wed, 19 Jul 2023 13:24:20 +0000 (15:24 +0200)]
selftests/xsk: add test for too many frags
Add a test that will exercise the maximum number of supported
fragments. This number depends on the mode of the test - for SKB and
DRV it will be 18, whereas for ZC it is defined by a value from the
NETDEV_A_DEV_XDP_ZC_MAX_SEGS netlink attribute.
Signed-off-by: Magnus Karlsson <magnus.karlsson@intel.com>
Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com> # made use of new netlink attribute
Link: https://lore.kernel.org/r/20230719132421.584801-24-maciej.fijalkowski@intel.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Magnus Karlsson [Wed, 19 Jul 2023 13:24:19 +0000 (15:24 +0200)]
selftests/xsk: add metadata copy test for multi-buff
Enable the already existing metadata copy test to also run in
multi-buffer mode with 9K packets.
Signed-off-by: Magnus Karlsson <magnus.karlsson@intel.com>
Link: https://lore.kernel.org/r/20230719132421.584801-23-maciej.fijalkowski@intel.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Magnus Karlsson [Wed, 19 Jul 2023 13:24:18 +0000 (15:24 +0200)]
selftests/xsk: add invalid descriptor test for multi-buffer
Add a test that produces lots of nasty descriptors testing the corner
cases of the descriptor validation. Some of these descriptors are
valid and some are not as indicated by the valid flag. For a
description of all the test combinations, please see the code.
To stress the API, we need to be able to generate combinations of
descriptors that make little sense. A new verbatim mode is introduced
for the packet_stream to accomplish this. In this mode, all packets in
the packet_stream are sent as is. We do not try to chop them up into
frames that are of the right size that we know are going to work as we
would normally do. The packets are just written into the Tx ring even
if we know they make no sense.
Signed-off-by: Magnus Karlsson <magnus.karlsson@intel.com>
Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com> # adjusted valid flags for frags
Link: https://lore.kernel.org/r/20230719132421.584801-22-maciej.fijalkowski@intel.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Magnus Karlsson [Wed, 19 Jul 2023 13:24:17 +0000 (15:24 +0200)]
selftests/xsk: add unaligned mode test for multi-buffer
Add a test for multi-buffer AF_XDP when using unaligned mode. The test
sends 4096 9K-buffers.
Signed-off-by: Magnus Karlsson <magnus.karlsson@intel.com>
Link: https://lore.kernel.org/r/20230719132421.584801-21-maciej.fijalkowski@intel.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Magnus Karlsson [Wed, 19 Jul 2023 13:24:16 +0000 (15:24 +0200)]
selftests/xsk: add basic multi-buffer test
Add the first basic multi-buffer test that sends a stream of 9K
packets and validates that they are received at the other end. In
order to enable sending and receiving multi-buffer packets, code that
sets the MTU is introduced as well as modifications to the XDP
programs so that they signal that they are multi-buffer enabled.
Signed-off-by: Magnus Karlsson <magnus.karlsson@intel.com>
Link: https://lore.kernel.org/r/20230719132421.584801-20-maciej.fijalkowski@intel.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Magnus Karlsson [Wed, 19 Jul 2023 13:24:15 +0000 (15:24 +0200)]
selftests/xsk: transmit and receive multi-buffer packets
Add the ability to send and receive packets that are larger than the
size of a umem frame, using the AF_XDP/XDP multi-buffer support. There
are three pieces of code that need to be changed to achieve this: the
Rx path, the Tx path, and the validation logic.
Both the Rx and Tx paths could previously only deal with a single
fragment per packet. The Tx path is extended with a new function called
pkt_nb_frags() that can be used to retrieve the number of fragments a
packet will consume. We then create that many fragments in a loop and
fill the first N-1 of them to the max size limit to use the buffer
space efficiently, and the Nth one with whatever data is left. This
goes on until we have filled in at most BATCH_SIZE worth of descriptors
and fragments. If we detect that the next packet would exceed
BATCH_SIZE fragments sent, we do not send this packet and finish the
batch. This packet is instead sent in the next iteration of BATCH_SIZE
fragments.
For Rx, we loop over all fragments we receive as usual, but for every
descriptor that we receive we call a new validation function called
is_frag_valid() to validate the consistency of this fragment. The code
then checks if the packet continues in the next frame. If so, it loops
over the next packet and performs the same validation. Once we have
received the last fragment of the packet, we also call the function
is_pkt_valid() to validate the packet as a whole. If we get to the end
of the batch and we are not at the end of the current packet, we back
out the partial packet and end the loop. Once we get into the receive
loop the next time, we start over from the beginning of that packet.
This way the code becomes simpler, at the cost of some performance.
The validation function is_frag_valid() checks that the sequence and
packet numbers are correct at the start and end of each fragment.
Signed-off-by: Magnus Karlsson <magnus.karlsson@intel.com>
Link: https://lore.kernel.org/r/20230719132421.584801-19-maciej.fijalkowski@intel.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Magnus Karlsson [Wed, 19 Jul 2023 13:24:14 +0000 (15:24 +0200)]
xsk: add multi-buffer documentation
Add AF_XDP multi-buffer support documentation including two
pseudo-code samples.
Signed-off-by: Magnus Karlsson <magnus.karlsson@intel.com>
Link: https://lore.kernel.org/r/20230719132421.584801-18-maciej.fijalkowski@intel.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Tirthendu Sarkar [Wed, 19 Jul 2023 13:24:13 +0000 (15:24 +0200)]
i40e: xsk: add TX multi-buffer support
Set the EOP bit in the Tx descriptor command only for the last
descriptor of the packet, and do not set it for the preceding
descriptors.
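A hedged sketch of the rule (i40e command-bit names; the exact patch
shape is assumed):

	u32 cmd = I40E_TX_DESC_CMD_ICRC;

	/* Only the packet's last frag gets EOP; descriptors with
	 * XDP_PKT_CONTD set continue the packet.
	 */
	if (!(desc->options & XDP_PKT_CONTD))
		cmd |= I40E_TX_DESC_CMD_EOP;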
Signed-off-by: Tirthendu Sarkar <tirthendu.sarkar@intel.com>
Link: https://lore.kernel.org/r/20230719132421.584801-17-maciej.fijalkowski@intel.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Maciej Fijalkowski [Wed, 19 Jul 2023 13:24:12 +0000 (15:24 +0200)]
ice: xsk: Tx multi-buffer support
Most of this patch is about actually supporting the XDP_TX action. Pure
Tx ZC support is only about looking at XDP_PKT_CONTD presence in the
options field and, based on that, generating the EOP bit on the Tx HW
descriptor. It is that simple due to the implementation of
xsk_tx_peek_release_desc_batch(), where we make sure that the last
produced descriptor is an EOP one.
Overwrite xdp_zc_max_segs with a value that defines the max
scatter-gather count on the Tx side that the HW can handle.
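A hedged sketch of that advertisement (the constant and assignment
site are assumptions for illustration):

	/* Let the xsk core know the HW scatter-gather Tx limit. */
	netdev->xdp_zc_max_segs = ICE_MAX_BUF_TXD;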
Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
Link: https://lore.kernel.org/r/20230719132421.584801-16-maciej.fijalkowski@intel.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Maciej Fijalkowski [Wed, 19 Jul 2023 13:24:11 +0000 (15:24 +0200)]
xsk: support ZC Tx multi-buffer in batch API
Modify xskq_cons_read_desc_batch() in such a way that each processed
descriptor is checked for being an EOP one or not, and acted upon
accordingly.
Change the behavior of the mentioned function to break the processing
when stumbling upon an invalid descriptor, instead of skipping it.
Furthermore, let us give only full packets down to the ZC driver.
With these two assumptions ZC drivers will not have to take care of an
intermediate state of incomplete frames, which simplifies their
implementations a lot.
Last but not least, stop processing when the count of frags would
exceed the max supported segments on the underlying device.
Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
Link: https://lore.kernel.org/r/20230719132421.584801-15-maciej.fijalkowski@intel.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Tirthendu Sarkar [Wed, 19 Jul 2023 13:24:10 +0000 (15:24 +0200)]
i40e: xsk: add RX multi-buffer support
This patch is inspired by the multi-buffer support in the non-zc path
for i40e as well as by the patch to support zc on ice. Each subsequent
frag is added to the skb_shared_info of the first frag for possible
xdp_prog use, as well as to the xsk buffer list for accessing the
buffers in af_xdp.
For XDP_PASS, new pages are allocated for the frags and contents are
copied from the memory backed by xsk_buff_pool.
Replace next_to_clean with next_to_process, as done in the non-zc path,
and advance it for every buffer, changing the semantics of
next_to_clean to point to the first buffer of a packet. The driver will
use next_to_process in the same way next_to_clean was used previously.
For the non-multi-buffer case, next_to_process and next_to_clean will
always be the same, since each packet consists of a single buffer.
Signed-off-by: Tirthendu Sarkar <tirthendu.sarkar@intel.com>
Link: https://lore.kernel.org/r/20230719132421.584801-14-maciej.fijalkowski@intel.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Maciej Fijalkowski [Wed, 19 Jul 2023 13:24:09 +0000 (15:24 +0200)]
ice: xsk: add RX multi-buffer support
This support is strongly inspired by the work that introduced
multi-buffer support to the regular Rx data path in ice. There are some
differences, though. When adding a frag, besides adding it to
skb_shared_info, also use the fresh xsk_buff_add_frag() helper. The
reason for doing both is that we cannot rule out that the AF_XDP
pipeline uses an XDP program that needs to access the frame fragments;
without them being in skb_shared_info that will not be possible.
Another difference is that XDP_PASS has to allocate new pages for each
frag and copy the contents from the memory backed by xsk_buff_pool.
chain_len, which is used for programming HW Rx descriptors, no longer
has to be limited to 1 when an xsk_pool is present - remove this
restriction.
Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
Link: https://lore.kernel.org/r/20230719132421.584801-13-maciej.fijalkowski@intel.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Maciej Fijalkowski [Wed, 19 Jul 2023 13:24:08 +0000 (15:24 +0200)]
xsk: support mbuf on ZC RX
Given that skb_shared_info relies on skb_frag_t, in order to support
xskb chaining, introduce xdp_buff_xsk::xskb_list_node and
xsk_buff_pool::xskb_list.
This is needed so ZC drivers can add frags as xskb nodes, which will
make it possible to handle them both when producing AF_XDP Rx
descriptors and when freeing/recycling all the frags that a single
frame carries.
Speaking of the latter, update xsk_buff_free() to take care of the list
nodes. For the former (adding as frags), introduce xsk_buff_add_frag()
for ZC drivers' usage, which is going to be used to add a frag to the
xskb list from the pool.
xsk_buff_get_frag() will be utilized by XDP_TX and, on the contrary,
will return an xdp_buff.
One of the previous patches added a wrapper for ZC Rx, so implement the
xskb list walk and production of Rx descriptors there.
On the bind() path, bail out if the socket wants to use ZC multi-buffer
but the underlying netdev does not support it.
Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
Link: https://lore.kernel.org/r/20230719132421.584801-12-maciej.fijalkowski@intel.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Maciej Fijalkowski [Wed, 19 Jul 2023 13:24:07 +0000 (15:24 +0200)]
xsk: add new netlink attribute dedicated for ZC max frags
Introduce the new netlink attribute NETDEV_A_DEV_XDP_ZC_MAX_SEGS that
carries the maximum number of fragments that the underlying ZC driver
is able to handle on the TX side. It is included in the netlink
response only when the driver supports ZC. Any value higher than 1
implies multi-buffer ZC support on the underlying device.
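On the driver side this presumably amounts to filling in a single
net_device field at probe time. The sketch below is hedged: the
xdp_zc_max_segs field name is assumed from this series, and the value
(max frags per packet) is driver-specific:
#include <linux/netdevice.h>
/* Hedged sketch: a ZC driver advertising multi-buffer Tx support. */
static void example_advertise_zc_segs(struct net_device *netdev)
{
        netdev->xdp_zc_max_segs = 3;    /* > 1 implies multi-buffer ZC */
}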
Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
Link: https://lore.kernel.org/r/20230719132421.584801-11-maciej.fijalkowski@intel.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Tirthendu Sarkar [Wed, 19 Jul 2023 13:24:06 +0000 (15:24 +0200)]
xsk: discard zero length descriptors in Tx path
Descriptors with zero length are not supported by many NICs. To
preserve uniform behavior, discard any zero-length desc as an invalid
desc.
Signed-off-by: Tirthendu Sarkar <tirthendu.sarkar@intel.com>
Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
Link: https://lore.kernel.org/r/20230719132421.584801-10-maciej.fijalkowski@intel.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Tirthendu Sarkar [Wed, 19 Jul 2023 13:24:05 +0000 (15:24 +0200)]
xsk: add support for AF_XDP multi-buffer on Tx path
For transmitting an AF_XDP packet, allocate an skb while processing
the first desc and copy data to it. The 'XDP_PKT_CONTD' flag in the
'options' field of the desc indicates the EOP status of the packet. If
the current desc is not EOP, store the skb, release the current desc
and go on to read the next descs.
Allocate a page for each subsequent desc, copy data to it and add it
as a frag to the skb stored in the xsk. On processing EOP, transmit
the skb with frags. The addresses contained in the descs have already
been queued in the consumer queue and the skb destructor updates the
completion count.
On transmit failure, cancel the releases, clear the descs from the
completion queue and consume the skb for retrying packet transmission.
For any invalid descriptor (invalid length/address/options) in the
middle of a packet, all pending descriptors are dropped by the xsk
core along with the invalid one, and the next descriptor is treated as
the start of a new packet.
The maximum number of frames supported for a packet is
MAX_SKB_FRAGS + 1. If it is exceeded, all descriptors accumulated so
far are dropped.
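From the application's point of view, producing such a multi-buffer
packet could look roughly like the sketch below, using the libxdp ring
helpers; the frame addresses/lengths are placeholders and error
handling is minimal:
#include <linux/if_xdp.h>   /* struct xdp_desc, XDP_PKT_CONTD */
#include <xdp/xsk.h>        /* libxdp: xsk_ring_prod helpers */
/* Enqueue one packet split across two umem frames (illustrative). */
static int tx_two_frag_packet(struct xsk_ring_prod *tx,
                              __u64 addr0, __u32 len0,
                              __u64 addr1, __u32 len1)
{
        __u32 idx;
        struct xdp_desc *d;

        if (xsk_ring_prod__reserve(tx, 2, &idx) != 2)
                return -1;              /* not enough room in Tx ring */

        d = xsk_ring_prod__tx_desc(tx, idx);
        d->addr = addr0;
        d->len = len0;
        d->options = XDP_PKT_CONTD;     /* more frags follow */

        d = xsk_ring_prod__tx_desc(tx, idx + 1);
        d->addr = addr1;
        d->len = len1;
        d->options = 0;                 /* EOP */

        xsk_ring_prod__submit(tx, 2);
        return 0;
}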
Signed-off-by: Tirthendu Sarkar <tirthendu.sarkar@intel.com>
Link: https://lore.kernel.org/r/20230719132421.584801-9-maciej.fijalkowski@intel.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Maciej Fijalkowski [Wed, 19 Jul 2023 13:24:04 +0000 (15:24 +0200)]
xsk: allow core/drivers to test EOP bit
Drivers are used to checking for the EOP bit, whereas AF_XDP operates
on inverted logic - user space indicates that the current frag is not
the last one and that the packet continues. For AF_XDP core needs, add
xp_mb_desc(), which simply tests XDP_PKT_CONTD from
xdp_desc::options, but in order to preserve drivers' default behavior,
introduce an interface for ZC drivers that negates the xp_mb_desc()
result and therefore makes it easier to test the EOP bit during
production of HW Tx descriptors.
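A minimal sketch of the pair of helpers described above; xp_mb_desc()
is named in the patch, while the name of the negating, driver-facing
wrapper here is an assumption for illustration:
#include <linux/if_xdp.h>   /* struct xdp_desc, XDP_PKT_CONTD */
#include <stdbool.h>
/* AF_XDP core view: does the packet continue past this descriptor? */
static inline bool xp_mb_desc(const struct xdp_desc *desc)
{
        return desc->options & XDP_PKT_CONTD;
}
/* Driver-facing view, preserving the familiar "is this EOP?" logic. */
static inline bool desc_is_eop(const struct xdp_desc *desc)
{
        return !xp_mb_desc(desc);
}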
Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
Link: https://lore.kernel.org/r/20230719132421.584801-8-maciej.fijalkowski@intel.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Tirthendu Sarkar [Wed, 19 Jul 2023 13:24:03 +0000 (15:24 +0200)]
xsk: introduce wrappers and helpers for supporting multi-buffer in Tx path
In the Tx path, the xsk core reserves space for each desc to be
transmitted in the completion queue, and the address contained in the
desc is stored in the skb destructor arg. After successful
transmission the skb destructor submits the addr, marking completion.
To handle multiple descriptors per packet, now along with reserving
space for each descriptor, the corresponding address is also stored in
the completion queue. The number of pending descriptors is stored in
the skb destructor arg and is used by the skb destructor to update
completions.
Introduce 'skb' in xdp_sock to store a partially built packet when
__xsk_generic_xmit() must return before it sees the EOP descriptor for
the current packet, so that packet building can resume in the next
call of __xsk_generic_xmit().
Helper functions are introduced to set and get the pending descriptors
in the skb destructor arg. Also, wrappers are introduced for storing
descriptor addresses, and for submitting and cancelling (for
unsuccessful transmissions) the number of completions.
Signed-off-by: Tirthendu Sarkar <tirthendu.sarkar@intel.com>
Link: https://lore.kernel.org/r/20230719132421.584801-7-maciej.fijalkowski@intel.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Tirthendu Sarkar [Wed, 19 Jul 2023 13:24:02 +0000 (15:24 +0200)]
xsk: add support for AF_XDP multi-buffer on Rx path
Add multi-buffer support for AF_XDP by extending the XDP multi-buffer
support to be reflected in user space when a packet is redirected to
an AF_XDP socket.
In the XDP implementation, the NIC driver builds the xdp_buff from the
first frag of the packet and adds any subsequent frags in the
skb_shinfo area of the xdp_buff. In the AF_XDP core, XDP buffers are
allocated from the xdp_sock's pool and data is copied from the
driver's xdp_buff and frags.
Once an allocated XDP buffer is full and there is still data to be
copied, the 'XDP_PKT_CONTD' flag in the 'options' field of the
corresponding xdp ring descriptor is set and passed to the
application. When the application sees this flag set, it knows there
is pending data for this packet that will be carried in the following
descriptors. If there is no more data to be copied, the flag in the
'options' field is cleared for that descriptor, signalling EOP to the
application.
If the application reads a batch of descriptors, using for example the
libxdp interfaces, it is not guaranteed that the batch will end with a
full packet. It might end in the middle of a packet, and the rest of
the frames of that packet will arrive at the beginning of the next
batch.
AF_XDP ensures that only a complete packet (along with all its frags)
is sent to the application.
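As a rough user-space sketch of the consumption pattern above, an
application using the libxdp ring helpers might reassemble packets
like this; actual frame handling is elided, the XDP_PKT_CONTD test is
the point:
#include <linux/if_xdp.h>   /* XDP_PKT_CONTD */
#include <xdp/xsk.h>        /* libxdp: xsk_ring_cons helpers */
static void drain_rx(struct xsk_ring_cons *rx)
{
        __u32 idx;
        __u32 n = xsk_ring_cons__peek(rx, 64, &idx);

        for (__u32 i = 0; i < n; i++) {
                const struct xdp_desc *d =
                        xsk_ring_cons__rx_desc(rx, idx + i);

                /* collect d->addr / d->len into the current packet */
                if (!(d->options & XDP_PKT_CONTD)) {
                        /* EOP: hand the completed packet to the app */
                }
        }
        xsk_ring_cons__release(rx, n);
}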
Signed-off-by: Tirthendu Sarkar <tirthendu.sarkar@intel.com>
Link: https://lore.kernel.org/r/20230719132421.584801-6-maciej.fijalkowski@intel.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Tirthendu Sarkar [Wed, 19 Jul 2023 13:24:01 +0000 (15:24 +0200)]
xsk: move xdp_buff's data length check to xsk_rcv_check
If the data in an xdp_buff exceeds the xsk frame length, the packet
needs to be dropped. This check is currently done in __xsk_rcv(). Move
the described logic to xsk_rcv_check() so that such an xdp_buff is
only dropped if the application does not support multi-buffer (absence
of the XDP_USE_SG bind flag). This is applicable for all cases: copy
mode, zero-copy mode as well as skb mode.
Signed-off-by: Tirthendu Sarkar <tirthendu.sarkar@intel.com>
Link: https://lore.kernel.org/r/20230719132421.584801-5-maciej.fijalkowski@intel.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Maciej Fijalkowski [Wed, 19 Jul 2023 13:24:00 +0000 (15:24 +0200)]
xsk: prepare both copy and zero-copy modes to co-exist
Currently, __xsk_rcv_zc() is the function responsible for producing
AF_XDP Rx descriptors. It is used by both copy and zero-copy modes.
These two modes are going to diverge when multi-buffer support is
added: ZC will work on a chain of xdp_buff_xsk structs whereas copy
mode is going to utilize the skb_shared_info contents. This means that
ZC-specific changes would affect the copy mode.
Let's modify __xsk_rcv_zc() to work directly on xdp_buff_xsk, so the
callsites have to retrieve it from the xdp_buff. Also, introduce
xsk_rcv_zc(), which will carry all the changes needed later for
supporting multi-buffer on the ZC side that do not apply to copy mode.
Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
Link: https://lore.kernel.org/r/20230719132421.584801-4-maciej.fijalkowski@intel.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Tirthendu Sarkar [Wed, 19 Jul 2023 13:23:59 +0000 (15:23 +0200)]
xsk: introduce XSK_USE_SG bind flag for xsk socket
As of now, the xsk core drops any xdp_buff with a data size greater
than the xsk frame_size as set by the af_xdp application. With the
multi-buffer support introduced in the next patch, the xsk core can
now split those buffers into multiple descriptors, provided the af_xdp
application can handle them. This capability of the application needs
to be independent of the xdp_prog's frag support capability, since
there are cases where even a single xdp_buff may need to be split into
multiple descriptors owing to a smaller xsk frame size.
For example, with the NIC rx_buffer size set to 4kB, a 3kB packet will
consist of a single buffer and so will be sent as such to the AF_XDP
layer irrespective of the 'xdp.frags' capability of the XDP program.
Now if the xsk frame size is set to 2kB by the AF_XDP application,
then the packet will need to be split into 2 descriptors if the AF_XDP
application can handle multi-buffer; otherwise it needs to be dropped.
Applications can now advertise their frag handling capability to the
xsk core so that the xsk core can decide whether it should drop or
split xdp_buffs that exceed the xsk frame size. This is done using a
new 'XSK_USE_SG' bind flag for the xdp socket.
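For illustration, an application could advertise this capability at
bind time roughly as follows; note the uapi spells the flag
XDP_USE_SG, as referenced in the previous patch, and the libxdp config
below is a hedged sketch, not the only way to set it up:
#include <linux/if_xdp.h>   /* XDP_USE_SG bind flag */
#include <xdp/xsk.h>        /* libxdp: struct xsk_socket_config */
struct xsk_socket_config cfg = {
        .rx_size = XSK_RING_CONS__DEFAULT_NUM_DESCS,
        .tx_size = XSK_RING_PROD__DEFAULT_NUM_DESCS,
        .bind_flags = XDP_USE_SG,   /* app can handle multi-buffer */
};
/* ... then pass &cfg to xsk_socket__create() as usual */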
Signed-off-by: Tirthendu Sarkar <tirthendu.sarkar@intel.com>
Link: https://lore.kernel.org/r/20230719132421.584801-3-maciej.fijalkowski@intel.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Tirthendu Sarkar [Wed, 19 Jul 2023 13:23:58 +0000 (15:23 +0200)]
xsk: prepare 'options' in xdp_desc for multi-buffer use
Use the 'options' field in xdp_desc as a packet continuity marker.
Since the 'options' field was unused till now and was expected to be
set to 0, the 'eop' descriptor will have it set to 0, while the
non-eop descriptors will have it set to 1. This ensures legacy
applications continue to work without needing any change for
single-buffer packets.
Add helper functions and extend xskq_prod_reserve_desc() to use the
'options' field.
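Concretely, on the uapi side this corresponds to something like the
following sketch; the struct layout is the existing xdp_desc, while
the exact flag value shown is an assumption:
/* include/uapi/linux/if_xdp.h (sketch) */
#define XDP_PKT_CONTD (1 << 0)  /* non-eop: packet continues in next desc */

struct xdp_desc {
        __u64 addr;
        __u32 len;
        __u32 options;  /* was always 0; 0 still means EOP for legacy apps */
};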
Signed-off-by: Tirthendu Sarkar <tirthendu.sarkar@intel.com>
Link: https://lore.kernel.org/r/20230719132421.584801-2-maciej.fijalkowski@intel.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Menglong Dong [Wed, 19 Jul 2023 11:03:30 +0000 (19:03 +0800)]
bpf, x86: initialize the variable "first_off" in save_args()
As Dan Carpenter reported, the variable "first_off", which is passed
to clean_stack_garbage() in save_args(), can be uninitialized, which
can cause runtime warnings with KMSAN. Therefore, initialize it to 0.
Fixes: 473e3150e30a ("bpf, x86: allow function arguments up to 12 for TRACING")
Cc: Hao Peng <flyingpeng@tencent.com>
Reported-by: Dan Carpenter <dan.carpenter@linaro.org>
Closes: https://lore.kernel.org/bpf/09784025-a812-493f-9829-5e26c8691e07@moroto.mountain/
Signed-off-by: Menglong Dong <imagedong@tencent.com>
Link: https://lore.kernel.org/r/20230719110330.2007949-1-imagedong@tencent.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Alexei Starovoitov [Wed, 19 Jul 2023 16:48:53 +0000 (09:48 -0700)]
Merge branch 'allow-bpf_map_sum_elem_count-for-all-program-types'
Anton Protopopov says:
====================
allow bpf_map_sum_elem_count for all program types
This series is a follow-up to the recent change [1] which added
per-cpu insert/delete statistics for maps. The bpf_map_sum_elem_count
kfunc presented in the original series was only available to tracing
programs, so let's make it available to all.
The first patch makes the types listed in the reg2btf_ids[] array be
considered trusted by kfuncs.
The second patch allows treating CONST_PTR_TO_MAP as a trusted pointer
from a kfunc's point of view by adding it to the reg2btf_ids[] array.
The third patch adds a missing const to the map argument of the
bpf_map_sum_elem_count kfunc.
The fourth patch registers bpf_map_sum_elem_count for all programs,
and patches selftests correspondingly.
[1] https://lore.kernel.org/bpf/20230705160139.19967-1-aspsk@isovalent.com/
v1 -> v2:
* treat the whole reg2btf_ids array as trusted (Alexei)
====================
Link: https://lore.kernel.org/r/20230719092952.41202-1-aspsk@isovalent.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Anton Protopopov [Wed, 19 Jul 2023 09:29:52 +0000 (09:29 +0000)]
bpf: allow any program to use the bpf_map_sum_elem_count kfunc
Register the bpf_map_sum_elem_count func for all programs, and update the
map_ptr subtest of the test_progs test to test the new functionality.
The usage is allowed as long as the pointer to the map is trusted
(when using tracing programs) or is a const pointer to a map, as in
the following example:
struct {
        __uint(type, BPF_MAP_TYPE_HASH);
        ...
} hash SEC(".maps");

...

static inline int some_bpf_prog(void)
{
        struct bpf_map *map = (struct bpf_map *)&hash;
        __s64 count;

        count = bpf_map_sum_elem_count(map);
        ...
}
Signed-off-by: Anton Protopopov <aspsk@isovalent.com>
Link: https://lore.kernel.org/r/20230719092952.41202-5-aspsk@isovalent.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Anton Protopopov [Wed, 19 Jul 2023 09:29:51 +0000 (09:29 +0000)]
bpf: make an argument const in the bpf_map_sum_elem_count kfunc
We use the map pointer only to read the counter values, no locking
involved, so mark the argument as const.
Signed-off-by: Anton Protopopov <aspsk@isovalent.com>
Link: https://lore.kernel.org/r/20230719092952.41202-4-aspsk@isovalent.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Anton Protopopov [Wed, 19 Jul 2023 09:29:50 +0000 (09:29 +0000)]
bpf: consider CONST_PTR_TO_MAP as trusted pointer to struct bpf_map
Add the BTF id of struct bpf_map to the reg2btf_ids array. This makes
values of the CONST_PTR_TO_MAP type be considered trusted by kfuncs.
This, in turn, allows users to execute trusted kfuncs which accept `struct
bpf_map *` arguments from non-tracing programs.
While exporting the btf_bpf_map_id variable, save some bytes by defining
it as BTF_ID_LIST_GLOBAL_SINGLE (which is u32[1]) and not as BTF_ID_LIST
(which is u32[64]).
Signed-off-by: Anton Protopopov <aspsk@isovalent.com>
Link: https://lore.kernel.org/r/20230719092952.41202-3-aspsk@isovalent.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Anton Protopopov [Wed, 19 Jul 2023 09:29:49 +0000 (09:29 +0000)]
bpf: consider types listed in reg2btf_ids as trusted
The reg2btf_ids array contains a list of types for which we can (and
need to) find a corresponding static BTF id. All the types in the list
can be considered trusted for the purposes of kfuncs.
Signed-off-by: Anton Protopopov <aspsk@isovalent.com>
Link: https://lore.kernel.org/r/20230719092952.41202-2-aspsk@isovalent.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
David S. Miller [Wed, 19 Jul 2023 11:32:07 +0000 (12:32 +0100)]
Merge branch 'remove-RTO_ONLINK-users'
Guillaume Nault says:
====================
net: Remove more RTO_ONLINK users.
Code that initialises a flowi4 structure manually before doing a fib
lookup can easily avoid overloading ->flowi4_tos with the RTO_ONLINK
bit. It can just set ->flowi4_scope correctly instead.
Properly separating the routing scope from ->flowi4_tos will
eventually allow converting this field to dscp_t (to ensure proper
separation between DSCP and ECN).
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Guillaume Nault [Mon, 17 Jul 2023 13:53:40 +0000 (15:53 +0200)]
sctp: Set TOS and routing scope independently for fib lookups.
There's no reason for setting the RTO_ONLINK flag in ->flowi4_tos as
RT_CONN_FLAGS() does. We can easily set ->flowi4_scope properly
instead. This makes the code more explicit and will allow converting
->flowi4_tos to dscp_t in the future.
Signed-off-by: Guillaume Nault <gnault@redhat.com>
Reviewed-by: Xin Long <lucien.xin@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Guillaume Nault [Mon, 17 Jul 2023 13:53:35 +0000 (15:53 +0200)]
dccp: Set TOS and routing scope independently for fib lookups.
There's no reason for setting the RTO_ONLINK flag in ->flowi4_tos as
RT_CONN_FLAGS() does. We can easily set ->flowi4_scope properly
instead. This makes the code more explicit and will allow converting
->flowi4_tos to dscp_t in the future.
Signed-off-by: Guillaume Nault <gnault@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Guillaume Nault [Mon, 17 Jul 2023 13:53:30 +0000 (15:53 +0200)]
gtp: Set TOS and routing scope independently for fib lookups.
There's no reason for setting the RTO_ONLINK flag in ->flowi4_tos as
RT_CONN_FLAGS() does. We can easily set ->flowi4_scope properly
instead. This makes the code more explicit and will allow converting
->flowi4_tos to dscp_t in the future.
Signed-off-by: Guillaume Nault <gnault@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Wed, 19 Jul 2023 11:28:54 +0000 (12:28 +0100)]
Merge branch '40GbE' of git://git./linux/kernel/git/tnguy/next-queue
Tony Nguyen says:
====================
Intel Wired LAN Driver Updates 2023-07-14 (i40e)
This series contains updates to i40e driver only.
Ivan Vecera adds waiting for VFs to complete initialization in
VF-related configuration callbacks.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Wed, 19 Jul 2023 10:10:53 +0000 (11:10 +0100)]
Merge branch 'mptcp-selftests'
Matthieu Baerts says:
====================
selftests: mptcp: format subtests results in TAP
The current selftests infrastructure formats the results in TAP 13. This
version doesn't support subtests and only the end result of each
selftest is taken into account. It means that a single issue in a
subtest of a selftest containing multiple subtests forces the whole
selftest to be marked as failed. It also means that subtest results are
not tracked by CIs executing selftests.
MPTCP selftests run hundreds of various subtests. It is therefore
important to track each of them and not just one result per selftest.
It is particularly interesting to do that when validating stable kernels
with the latest version of the test suite: tests might fail because a
feature is not supported but the test didn't skip that part. In this
case, if subtests are not tracked, the whole selftest will be marked as
failed, making the other subtests useless because their results are
ignored.
Regarding this patch set:
- The two first patches modify connect and userspace_pm selftests to
continue executing other tests if there is an error before the end.
This is what is done in the other MPTCP selftests.
- Patches 3-5 are refactoring the code in userspace_pm selftest to
reduce duplicated code, suppress some shellcheck warnings and prepare
subtests' support by using new helpers.
- Patch 6 adds new helpers in mptcp_lib.sh to easily support printing
the subtests results in the different MPTCP selftests.
- Patch 7-13 format subtests results in TAP 13 in the different MPTCP
selftests.
====================
Signed-off-by: Matthieu Baerts <matthieu.baerts@tessares.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
Matthieu Baerts [Mon, 17 Jul 2023 13:21:33 +0000 (15:21 +0200)]
selftests: mptcp: userspace_pm: format subtests results in TAP
The current selftests infrastructure formats the results in TAP 13. This
version doesn't support subtests and only the end result of each
selftest is taken into account. It means that a single issue in a
subtest of a selftest containing multiple subtests forces the whole
selftest to be marked as failed. It also means that subtest results are
not tracked by CIs executing selftests.
MPTCP selftests run hundreds of various subtests. It is therefore
important to track each of them and not just one result per selftest.
It is particularly interesting to do that when validating stable kernels
with the latest version of the test suite: tests might fail because a
feature is not supported but the test didn't skip that part. In this
case, if subtests are not tracked, the whole selftest will be marked as
failed, making the other subtests useless because their results are
ignored.
This patch formats subtests results in TAP in userspace_pm.sh selftest.
Link: https://github.com/multipath-tcp/mptcp_net-next/issues/368
Acked-by: Paolo Abeni <pabeni@redhat.com>
Signed-off-by: Matthieu Baerts <matthieu.baerts@tessares.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
Matthieu Baerts [Mon, 17 Jul 2023 13:21:32 +0000 (15:21 +0200)]
selftests: mptcp: sockopt: format subtests results in TAP
The current selftests infrastructure formats the results in TAP 13. This
version doesn't support subtests and only the end result of each
selftest is taken into account. It means that a single issue in a
subtest of a selftest containing multiple subtests forces the whole
selftest to be marked as failed. It also means that subtest results are
not tracked by CIs executing selftests.
MPTCP selftests run hundreds of various subtests. It is therefore
important to track each of them and not just one result per selftest.
It is particularly interesting to do that when validating stable kernels
with the latest version of the test suite: tests might fail because a
feature is not supported but the test didn't skip that part. In this
case, if subtests are not tracked, the whole selftest will be marked as
failed, making the other subtests useless because their results are
ignored.
This patch formats subtests results in TAP in mptcp_sockopt.sh selftest.
Link: https://github.com/multipath-tcp/mptcp_net-next/issues/368
Acked-by: Paolo Abeni <pabeni@redhat.com>
Signed-off-by: Matthieu Baerts <matthieu.baerts@tessares.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
Matthieu Baerts [Mon, 17 Jul 2023 13:21:31 +0000 (15:21 +0200)]
selftests: mptcp: simult flows: format subtests results in TAP
The current selftests infrastructure formats the results in TAP 13. This
version doesn't support subtests and only the end result of each
selftest is taken into account. It means that a single issue in a
subtest of a selftest containing multiple subtests forces the whole
selftest to be marked as failed. It also means that subtest results are
not tracked by CIs executing selftests.
MPTCP selftests run hundreds of various subtests. It is therefore
important to track each of them and not just one result per selftest.
It is particularly interesting to do that when validating stable kernels
with the latest version of the test suite: tests might fail because a
feature is not supported but the test didn't skip that part. In this
case, if subtests are not tracked, the whole selftest will be marked as
failed, making the other subtests useless because their results are
ignored.
This patch formats subtests results in TAP in simult_flows.sh selftest.
Link: https://github.com/multipath-tcp/mptcp_net-next/issues/368
Acked-by: Paolo Abeni <pabeni@redhat.com>
Signed-off-by: Matthieu Baerts <matthieu.baerts@tessares.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
Matthieu Baerts [Mon, 17 Jul 2023 13:21:30 +0000 (15:21 +0200)]
selftests: mptcp: diag: format subtests results in TAP
The current selftests infrastructure formats the results in TAP 13. This
version doesn't support subtests and only the end result of each
selftest is taken into account. It means that a single issue in a
subtest of a selftest containing multiple subtests forces the whole
selftest to be marked as failed. It also means that subtest results are
not tracked by CIs executing selftests.
MPTCP selftests run hundreds of various subtests. It is therefore
important to track each of them and not just one result per selftest.
It is particularly interesting to do that when validating stable kernels
with the latest version of the test suite: tests might fail because a
feature is not supported but the test didn't skip that part. In this
case, if subtests are not tracked, the whole selftest will be marked as
failed, making the other subtests useless because their results are
ignored.
This patch formats subtests results in TAP in diag.sh selftest.
Link: https://github.com/multipath-tcp/mptcp_net-next/issues/368
Acked-by: Paolo Abeni <pabeni@redhat.com>
Signed-off-by: Matthieu Baerts <matthieu.baerts@tessares.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
Matthieu Baerts [Mon, 17 Jul 2023 13:21:29 +0000 (15:21 +0200)]
selftests: mptcp: join: format subtests results in TAP
The current selftests infrastructure formats the results in TAP 13. This
version doesn't support subtests and only the end result of each
selftest is taken into account. It means that a single issue in a
subtest of a selftest containing multiple subtests forces the whole
selftest to be marked as failed. It also means that subtest results are
not tracked by CIs executing selftests.
MPTCP selftests run hundreds of various subtests. It is therefore
important to track each of them and not just one result per selftest.
It is particularly interesting to do that when validating stable kernels
with the latest version of the test suite: tests might fail because a
feature is not supported but the test didn't skip that part. In this
case, if subtests are not tracked, the whole selftest will be marked as
failed, making the other subtests useless because their results are
ignored.
This patch formats subtests results in TAP in mptcp_join.sh selftest.
In this selftest, the 'reset' function is called before starting each
subtest. From there, we can check whether the previous test passed,
failed or was skipped.
Link: https://github.com/multipath-tcp/mptcp_net-next/issues/368
Acked-by: Paolo Abeni <pabeni@redhat.com>
Signed-off-by: Matthieu Baerts <matthieu.baerts@tessares.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
Matthieu Baerts [Mon, 17 Jul 2023 13:21:28 +0000 (15:21 +0200)]
selftests: mptcp: pm_netlink: format subtests results in TAP
The current selftests infrastructure formats the results in TAP 13. This
version doesn't support subtests and only the end result of each
selftest is taken into account. It means that a single issue in a
subtest of a selftest containing multiple subtests forces the whole
selftest to be marked as failed. It also means that subtest results are
not tracked by CIs executing selftests.
MPTCP selftests run hundreds of various subtests. It is therefore
important to track each of them and not just one result per selftest.
It is particularly interesting to do that when validating stable kernels
with the latest version of the test suite: tests might fail because a
feature is not supported but the test didn't skip that part. In this
case, if subtests are not tracked, the whole selftest will be marked as
failed, making the other subtests useless because their results are
ignored.
This patch formats subtests results in TAP in pm_netlink.sh selftest.
Link: https://github.com/multipath-tcp/mptcp_net-next/issues/368
Acked-by: Paolo Abeni <pabeni@redhat.com>
Signed-off-by: Matthieu Baerts <matthieu.baerts@tessares.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
Matthieu Baerts [Mon, 17 Jul 2023 13:21:27 +0000 (15:21 +0200)]
selftests: mptcp: connect: format subtests results in TAP
The current selftests infrastructure formats the results in TAP 13. This
version doesn't support subtests and only the end result of each
selftest is taken into account. It means that a single issue in a
subtest of a selftest containing multiple subtests forces the whole
selftest to be marked as failed. It also means that subtest results are
not tracked by CIs executing selftests.
MPTCP selftests run hundreds of various subtests. It is therefore
important to track each of them and not just one result per selftest.
It is particularly interesting to do that when validating stable kernels
with the latest version of the test suite: tests might fail because a
feature is not supported but the test didn't skip that part. In this
case, if subtests are not tracked, the whole selftest will be marked as
failed, making the other subtests useless because their results are
ignored.
This patch formats subtests results in TAP in mptcp_connect.sh selftest.
Link: https://github.com/multipath-tcp/mptcp_net-next/issues/368
Acked-by: Paolo Abeni <pabeni@redhat.com>
Signed-off-by: Matthieu Baerts <matthieu.baerts@tessares.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
Matthieu Baerts [Mon, 17 Jul 2023 13:21:26 +0000 (15:21 +0200)]
selftests: mptcp: lib: format subtests results in TAP
The current selftests infrastructure formats the results in TAP 13. This
version doesn't support subtests and only the end result of each
selftest is taken into account. It means that a single issue in a
subtest of a selftest containing multiple subtests forces the whole
selftest to be marked as failed. It also means that subtest results are
not tracked by CIs executing selftests.
MPTCP selftests run hundreds of various subtests. It is therefore
important to track each of them and not just one result per selftest.
It is particularly interesting to do that when validating stable kernels
with the latest version of the test suite: tests might fail because a
feature is not supported but the test didn't skip that part. In this
case, if subtests are not tracked, the whole selftest will be marked as
failed, making the other subtests useless because their results are
ignored.
This patch adds some helpers in mptcp_lib.sh to be able to easily format
subtests results in TAP in the different MPTCP selftests.
Closes: https://github.com/multipath-tcp/mptcp_net-next/issues/368
Acked-by: Paolo Abeni <pabeni@redhat.com>
Signed-off-by: Matthieu Baerts <matthieu.baerts@tessares.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
Matthieu Baerts [Mon, 17 Jul 2023 13:21:25 +0000 (15:21 +0200)]
selftests: mptcp: userspace_pm: reduce dup code around printf
In this selftest, "printf" is always used with "stdbuf".
With a new helper, it is possible to call "stdbuf" only from one place.
This makes the code a bit clearer to read.
Acked-by: Paolo Abeni <pabeni@redhat.com>
Signed-off-by: Matthieu Baerts <matthieu.baerts@tessares.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
Matthieu Baerts [Mon, 17 Jul 2023 13:21:24 +0000 (15:21 +0200)]
selftests: mptcp: userspace_pm: uniform results printing
There are a few reasons to do that:
- When the tabs are not printed as 8 spaces, some results were not
properly aligned
- Some lines printing the test name were very long due to the use of a
lot of spaces/tabs at the end and stdbuf at the beginning.
- To reduce duplicated code, e.g. to print what has failed and set the
status
By centralising how the test results are printed, this also prepares
for future commits that will avoid more duplicated code and ease the
tracking of the different subtests.
Acked-by: Paolo Abeni <pabeni@redhat.com>
Signed-off-by: Matthieu Baerts <matthieu.baerts@tessares.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
Matthieu Baerts [Mon, 17 Jul 2023 13:21:23 +0000 (15:21 +0200)]
selftests: mptcp: userspace_pm: fix shellcheck warnings
shellcheck recently helped to find an issue where a wrong variable name
was used. It is then good to fix the other harmless issues in order to
spot "real" ones later.
Here, three categories of warnings are ignored:
- SC2317: Command appears to be unreachable. The cleanup() function is
invoked indirectly via the EXIT trap.
- SC2034: Variable appears unused. The check_expected_one() function
takes the name of the variable as an argument but ends up reading its
content: indirect usage.
- SC2086: Double quote to prevent globbing and word splitting. This is
recommended but the current usage is correct and there is no need to
do all these modifications to be compliant with this rule.
One error has been fixed with SC2181: Check exit code directly with e.g.
'if ! mycmd;', not indirectly with $?.
Acked-by: Paolo Abeni <pabeni@redhat.com>
Signed-off-by: Matthieu Baerts <matthieu.baerts@tessares.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
Matthieu Baerts [Mon, 17 Jul 2023 13:21:22 +0000 (15:21 +0200)]
selftests: mptcp: userspace pm: don't stop if error
Previously, no more tests were executed after a failure, but it is
still interesting to get results for all the tests to better
understand what's still OK and what's not after a modification.
Now we only exit earlier if the two connections cannot be established.
Acked-by: Paolo Abeni <pabeni@redhat.com>
Signed-off-by: Matthieu Baerts <matthieu.baerts@tessares.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
Matthieu Baerts [Mon, 17 Jul 2023 13:21:21 +0000 (15:21 +0200)]
selftests: mptcp: connect: don't stop if error
Previously, no more tests were executed after a failure, but it is
still interesting to get results for all the tests to better
understand what's still OK and what's not after a modification.
Now we only exit earlier if the basic tests are failing: no ping going
through namespaces or unable to transfer data on the loopback interface.
Acked-by: Paolo Abeni <pabeni@redhat.com>
Signed-off-by: Matthieu Baerts <matthieu.baerts@tessares.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
Rohan G Thomas [Mon, 17 Jul 2023 12:06:03 +0000 (20:06 +0800)]
net: stmmac: xgmac: Fix L3L4 filter count
Get the exact count of L3L4 filters when the L3L4FNUM field of the
HW_FEATURE1 register is >= 8. If L3L4FNUM < 8, then the number of L3L4
filters supported by the XGMAC is equal to L3L4FNUM. For
L3L4FNUM >= 8, the number of L3L4 filters follows the sequence
8, 16, 32, ... The current maximum is L3L4FNUM = 10.
Also, fix the XGMAC_IDDR bitmask of the L3L4_ADDR_CTRL register. The
IDDR field starts at the 8th bit of the L3L4_ADDR_CTRL register.
IDDR[3:0] indicates the type of L3L4 filter register while IDDR[8:4]
indicates the filter number (0 to 31). So overall 9 bits are used for
IDDR (i.e. L3L4_ADDR_CTRL[16:8]) to address the registers of all the
filters. Currently, XGMAC_IDDR is GENMASK(15, 8), causing issues
accessing L3L4 filters above 15 for those XGMACs configured with more
than 16 L3L4 filters.
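In mask terms, the described fix boils down to widening the field by
one bit, roughly as below; the macro and register names are from the
commit text, and the exact new mask is inferred from the 9-bit
L3L4_ADDR_CTRL[16:8] description above:
#include <linux/bits.h>
/* before: 8 bits, can only address L3L4 filters 0..15 */
/* #define XGMAC_IDDR GENMASK(15, 8) */
/* after: 9 bits, L3L4_ADDR_CTRL[16:8], covering filters 0..31 */
#define XGMAC_IDDR GENMASK(16, 8)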
Signed-off-by: Rohan G Thomas <rohan.g.thomas@intel.com>
Reviewed-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Wed, 19 Jul 2023 09:53:49 +0000 (10:53 +0100)]
Merge branch 'backup-nexthop-ID'
Ido Schimmel says:
====================
Add backup nexthop ID support
tl;dr
=====
This patchset adds a new bridge port attribute specifying the nexthop
object ID to attach to a redirected skb as tunnel metadata. The ID is
used by the VXLAN driver to choose the target VTEP for the skb. This is
useful for EVPN multi-homing, where we want to redirect local
(intra-rack) traffic upon carrier loss through one of the other VTEPs
(ES peers) connected to the target host.
Background
==========
In a typical EVPN multi-homing setup each host is multi-homed using a
set of links called ES (Ethernet Segment, i.e., LAG) to multiple leaf
switches in a rack. These switches act as VTEPs and are not directly
connected (as opposed to MLAG), but can communicate with each other (as
well as with VTEPs in remote racks) via spine switches over L3.
The control plane uses Type 1 routes [1] to create a mapping between an
ES and VTEPs where the ES has active links. In addition, the control
plane uses Type 2 routes [2] to create a mapping between {MAC, VLAN} and
an ES.
These tables are then used by the control plane to instruct VTEPs how to
reach remote hosts. For example, assume {MAC X, VLAN Y} is accessible
via ES1 and this ES has active links to VTEP1 and VTEP2. The control
plane will program the following entries to a remote VTEP:
# ip nexthop add id 1 via $VTEP1_IP fdb
# ip nexthop add id 2 via $VTEP2_IP fdb
# ip nexthop add id 10 group 1/2 fdb
# bridge fdb add $MAC_X dev vx0 master extern_learn vlan $VLAN_Y
# bridge fdb add $MAC_X dev vx0 self extern_learn nhid 10 src_vni $VNI_Y
Remote traffic towards the host will be load balanced between VTEP1 and
VTEP2. If the control plane notices a carrier loss on the ES1 link
connected to VTEP1, it will issue a Type 1 route withdraw, prompting
remote VTEPs to remove the affected nexthop from the group:
# ip nexthop replace id 10 group 2 fdb
Motivation
==========
While remote traffic can be redirected to a VTEP with an active ES link
by withdrawing a Type 1 route, the same is not true for local traffic. A
host that is multi-homed to VTEP1 and VTEP2 via another ES (e.g., ES2)
will send its traffic to {MAC X, VLAN Y} via one of these two switches,
according to its LAG hash algorithm which is not under our control. If
the traffic arrives at VTEP1 - which no longer has an active ES1 link -
it will be dropped due to the carrier loss.
In MLAG setups, the above problem is solved by redirecting the traffic
through the peer link upon carrier loss. This is achieved by defining
the peer link as the backup port of the host facing bond. For example:
# bridge link set dev bond0 backup_port bond_peer
Unlike MLAG, there is no peer link between the leaf switches in EVPN.
Instead, upon carrier loss, local traffic should be redirected through
one of the active ES peers. This can be achieved by defining the VXLAN
port as the backup port of the host facing bonds. For example:
# bridge link set dev es1_bond backup_port vx0
However, the VXLAN driver is not programmed with FDB entries for locally
attached hosts and therefore does not know to which VTEP to redirect the
traffic to. This will result in the traffic being replicated to all the
VTEPs (potentially hundreds) in the network and each VTEP dropping the
traffic, except for the active ES peer.
Avoiding the flooding by programming local FDB entries in the VXLAN
driver is not a viable solution as it would require significantly
increasing the number of programmed FDB entries.
Implementation
==============
The proposed solution is to create an FDB nexthop group for each ES with
the IP addresses of the active ES peers and set this ID as the backup
nexthop ID (new bridge port attribute) of the ES link. For example, on
VTEP1:
# ip nexthop add id 1 via $VTEP2_IP fdb
# ip nexthop add id 10 group 1 fdb
# bridge link set dev es1_bond backup_nhid 10
# bridge link set dev es1_bond backup_port vx0
When the ES link loses its carrier, traffic will be redirected to the
VXLAN port, but instead of only attaching the tunnel ID (i.e., VNI) as
tunnel metadata to the skb, the backup nexthop ID will be attached as
well. The VXLAN driver will then use this information to forward the skb
via the nexthop object associated with the ID, as if the skb hit an FDB
entry associated with this ID.
Testing
=======
A test for both the existing backup port attribute as well as the new
backup nexthop ID attribute is added in patch #4.
Patchset overview
=================
Patch #1 extends the tunnel key structure with the new nexthop ID field.
Patch #2 uses the new field in the VXLAN driver to forward packets via
the specified nexthop ID.
Patch #3 adds the new backup nexthop ID bridge port attribute and
adjusts the bridge driver to attach the ID as tunnel metadata upon
redirection.
Patch #4 adds a selftest.
iproute2 patches can be found here [3].
Changelog
=========
Since RFC [4]:
* Added Nik's tags.
[1] https://datatracker.ietf.org/doc/html/rfc7432#section-7.1
[2] https://datatracker.ietf.org/doc/html/rfc7432#section-7.2
[3] https://github.com/idosch/iproute2/tree/submit/backup_nhid_v1
[4] https://lore.kernel.org/netdev/20230713070925.3955850-1-idosch@nvidia.com/
====================
Acked-by: David Ahern <dsahern@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Ido Schimmel [Mon, 17 Jul 2023 08:12:29 +0000 (11:12 +0300)]
selftests: net: Add bridge backup port and backup nexthop ID test
Add test cases for bridge backup port and backup nexthop ID, testing
both good and bad flows.
Example truncated output:
# ./test_bridge_backup_port.sh
[...]
Tests passed: 83
Tests failed: 0
Signed-off-by: Ido Schimmel <idosch@nvidia.com>
Acked-by: Nikolay Aleksandrov <razor@blackwall.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Ido Schimmel [Mon, 17 Jul 2023 08:12:28 +0000 (11:12 +0300)]
bridge: Add backup nexthop ID support
Add a new bridge port attribute that allows attaching a nexthop object
ID to an skb that is redirected to a backup bridge port with VLAN
tunneling enabled.
Specifically, when redirecting a known unicast packet, read the backup
nexthop ID from the bridge port that lost its carrier and set it in the
bridge control block of the skb before forwarding it via the backup
port. Note that reading the ID from the bridge port should not result in
a cache miss as the ID is added next to the 'backup_port' field that was
already accessed. After this change, the 'state' field still stays on
the first cache line, together with other data path related fields such
as 'flags' and 'vlgrp':
struct net_bridge_port {
        struct net_bridge *        br;                   /*     0     8 */
        struct net_device *        dev;                  /*     8     8 */
        netdevice_tracker          dev_tracker;          /*    16     0 */
        struct list_head           list;                 /*    16    16 */
        long unsigned int          flags;                /*    32     8 */
        struct net_bridge_vlan_group * vlgrp;            /*    40     8 */
        struct net_bridge_port *   backup_port;          /*    48     8 */
        u32                        backup_nhid;          /*    56     4 */
        u8                         priority;             /*    60     1 */
        u8                         state;                /*    61     1 */
        u16                        port_no;              /*    62     2 */
        /* --- cacheline 1 boundary (64 bytes) --- */
        [...]
} __attribute__((__aligned__(8)));
When forwarding an skb via a bridge port that has VLAN tunneling
enabled, check if the backup nexthop ID stored in the bridge control
block is valid (i.e., not zero). If so, instead of attaching the
pre-allocated metadata (that only has the tunnel key set), allocate a
new metadata, set both the tunnel key and the nexthop object ID and
attach it to the skb.
By default, do not dump the new attribute to user space as a value of
zero is an invalid nexthop object ID.
The above is useful for EVPN multihoming. When one of the links
composing an Ethernet Segment (ES) fails, traffic needs to be redirected
towards the host via one of the other ES peers. For example, if a host
is multihomed to three different VTEPs, the backup port of each ES link
needs to be set to the VXLAN device and the backup nexthop ID needs to
point to an FDB nexthop group that includes the IP addresses of the
other two VTEPs. The VXLAN driver will extract the ID from the metadata
of the redirected skb, calculate its flow hash and forward it towards
one of the other VTEPs. If the ID does not exist, or represents an
invalid nexthop object, the VXLAN driver will drop the skb. This
relieves the bridge driver from the need to validate the ID.
Signed-off-by: Ido Schimmel <idosch@nvidia.com>
Acked-by: Nikolay Aleksandrov <razor@blackwall.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Ido Schimmel [Mon, 17 Jul 2023 08:12:27 +0000 (11:12 +0300)]
vxlan: Add support for nexthop ID metadata
VXLAN FDB entries can point to FDB nexthop objects. Each such object
includes the IP address(es) of remote VTEP(s) via which the target host
is accessible. Example:
# ip nexthop add id 1 via 192.0.2.1 fdb
# ip nexthop add id 2 via 192.0.2.17 fdb
# ip nexthop add id 1000 group 1/2 fdb
# bridge fdb add 00:11:22:33:44:55 dev vx0 self static nhid 1000 src_vni 10020
This is useful for EVPN multihoming where a single host can be connected
to multiple VTEPs. The source VTEP will calculate the flow hash of the
skb and forward it towards the IP address of one of the VTEPs member in
the nexthop group.
There are cases where an external entity (e.g., the bridge driver) can
provide not only the tunnel ID (i.e., VNI) of the skb, but also the ID
of the nexthop object via which the skb should be forwarded.
Therefore, in order to support such cases, when the VXLAN device is in
external / collect metadata mode and the tunnel info attached to the skb
is of bridge type, extract the nexthop ID from the tunnel info. If the
ID is valid (i.e., non-zero), forward the skb via the nexthop object
associated with the ID, as if the skb hit an FDB entry associated with
this ID.
Signed-off-by: Ido Schimmel <idosch@nvidia.com>
Acked-by: Nikolay Aleksandrov <razor@blackwall.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Ido Schimmel [Mon, 17 Jul 2023 08:12:26 +0000 (11:12 +0300)]
ip_tunnels: Add nexthop ID field to ip_tunnel_key
Extend the ip_tunnel_key structure with a field indicating the ID of the
nexthop object via which the skb should be routed.
The field is going to be populated in subsequent patches by the bridge
driver in order to indicate to the VXLAN driver which FDB nexthop object
to use in order to reach the target host.
Signed-off-by: Ido Schimmel <idosch@nvidia.com>
Reviewed-by: Nikolay Aleksandrov <razor@blackwall.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Mao Zhu [Tue, 18 Jul 2023 16:25:23 +0000 (00:25 +0800)]
can: ucan: Remove repeated word
Delete one of the repeated words 'information' in a comment.
Signed-off-by: Mao Zhu <zhumao001@208suo.com>
Link: https://lore.kernel.org/all/20230718163718.461137-1-mkl@pengutronix.de
Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
Marc Kleine-Budde [Wed, 19 Jul 2023 07:04:21 +0000 (09:04 +0200)]
Merge patch series "can: kvaser_pciefd: Add support for new Kvaser PCI Express devices"
Jimmy Assarsson <extja@kvaser.com> says:
This patch series adds support for a range of new Kvaser PCI Express
devices, based on the SmartFusion2 SoC, to the kvaser_pciefd driver.
In the first patch, the hardware specific constants and functions are
moved into a driver_data struct.
In the second patch, we add the new devices and their hardware
specific constants and functions.
Changes in v2:
- Rebased on "can: kvaser_pciefd: Fixes and improvements"
https://lore.kernel.org/all/20230529134248.752036-1-extja@kvaser.com
- Dropped "can: kvaser_pciefd: Wrap register read and writes with macros"
https://lore.kernel.org/linux-can/20230523094354.83792-17-extja@kvaser.com
since the driver became a lot cleaner when using FIELD_{GET,PREP} and
GENMASK. Moved some parts of the patch into "can: kvaser_pciefd: Move
hardware specific constants and functions into a driver_data struct".
Removed macros reading/writing registers.
- Link to v1: https://lore.kernel.org/all/20230523094354.83792-14-extja@kvaser.com
Link: https://lore.kernel.org/all/20230622151153.294844-1-extja@kvaser.com
Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
Jimmy Assarsson [Thu, 22 Jun 2023 15:11:53 +0000 (17:11 +0200)]
can: kvaser_pciefd: Add support for new Kvaser pciefd devices
Add support for new Kvaser pciefd devices, based on SmartFusion2 SoC.
Signed-off-by: Jimmy Assarsson <extja@kvaser.com>
Link: https://lore.kernel.org/all/20230622151153.294844-3-extja@kvaser.com
[mkl: mark structs as static]
Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
Jimmy Assarsson [Thu, 22 Jun 2023 15:11:52 +0000 (17:11 +0200)]
can: kvaser_pciefd: Move hardware specific constants and functions into a driver_data struct
Move hardware specific address offsets, interrupt masks and DMA mapping
function into struct kvaser_pciefd_driver_data, as a step towards adding
new devices based on different hardware.
Co-developed-by: Martin Jocic <majoc@kvaser.com>
Signed-off-by: Martin Jocic <majoc@kvaser.com>
Signed-off-by: Jimmy Assarsson <extja@kvaser.com>
Link: https://lore.kernel.org/all/20230622151153.294844-2-extja@kvaser.com
[mkl: mark structs as static]
Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
Marc Kleine-Budde [Wed, 19 Jul 2023 06:57:05 +0000 (08:57 +0200)]
Merge patch series "can: xilinx_can: Add support for reset"
Michal Simek <michal.simek@amd.com> says:
The IP core has an optional reset line which can be wired up; that's
why we add support for an optional reset.
Changes in v2:
- Add Conor's ACK
- Fix use-after-free in xcan_remove reported by Marc.
- Link to v1: https://lore.kernel.org/all/cover.1689084227.git.michal.simek@amd.com
Link: https://lore.kernel.org/all/cover.1689164442.git.michal.simek@amd.com
Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
Rob Herring [Fri, 14 Jul 2023 17:47:57 +0000 (11:47 -0600)]
can: Explicitly include correct DT includes
The DT of_device.h and of_platform.h date back to the separate
of_platform_bus_type before it was merged into the regular platform bus.
As part of that merge prepping Arm DT support 13 years ago, they
"temporarily" include each other. They also include platform_device.h
and of.h. As a result, there's a pretty much random mix of those include
files used throughout the tree. In order to detangle these headers and
replace the implicit includes with struct declarations, users need to
explicitly include the correct includes.
Signed-off-by: Rob Herring <robh@kernel.org>
Link: https://lore.kernel.org/all/20230714174757.4060748-1-robh@kernel.org
Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
Srinivas Neeli [Wed, 12 Jul 2023 12:20:46 +0000 (14:20 +0200)]
can: xilinx_can: Add support for controller reset
Add support for an optional reset for the CAN controller using the reset
driver. If the CAN node contains the "resets" property, then this logic
will perform a CAN controller reset.
Signed-off-by: Srinivas Neeli <srinivas.neeli@amd.com>
Signed-off-by: Michal Simek <michal.simek@amd.com>
Link: https://lore.kernel.org/all/ab7e6503aa3343e39ead03c1797e765be6c50de2.1689164442.git.michal.simek@amd.com
Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
Michal Simek [Wed, 12 Jul 2023 12:20:45 +0000 (14:20 +0200)]
dt-bindings: can: xilinx_can: Add reset description
The IP core has an input for a reset signal which can be connected;
that's why we describe the optional reset property.
Signed-off-by: Michal Simek <michal.simek@amd.com>
Acked-by: Conor Dooley <conor.dooley@microchip.com>
Link: https://lore.kernel.org/all/bfaed896cc51af02fe5f290675313ab4dcab0d33.1689164442.git.michal.simek@amd.com
Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
Jakub Kicinski [Wed, 19 Jul 2023 02:01:07 +0000 (19:01 -0700)]
Merge branch 'remove-unnecessary-void-conversions'
Wu Yunchuan says:
====================
Remove unnecessary (void*) conversions
Remove (void*) conversions under "drivers/net" directory.
PATCH v2 link:
https://lore.kernel.org/all/20230710063828.172593-1-suhui@nfschina.com/
PATCH v1 link:
https://lore.kernel.org/all/20230628024121.1439149-1-yunchuan@nfschina.com/
====================
Link: https://lore.kernel.org/r/20230717030937.53818-1-yunchuan@nfschina.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Wu Yunchuan [Mon, 17 Jul 2023 03:12:29 +0000 (11:12 +0800)]
net: bna: Remove unnecessary (void*) conversions
No need to cast (void *) to (struct bnad_tx_info *) or
(struct bnad_rx_info *).
Signed-off-by: Wu Yunchuan <yunchuan@nfschina.com>
Link: https://lore.kernel.org/r/20230717031229.55169-1-yunchuan@nfschina.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Wu Yunchuan [Mon, 17 Jul 2023 03:12:21 +0000 (11:12 +0800)]
can: ems_pci: Remove unnecessary (void*) conversions
No need to cast (void *) to (struct ems_pci_card *).
Signed-off-by: Wu Yunchuan <yunchuan@nfschina.com>
Acked-by: Marc Kleine-Budde <mkl@pengutronix.de>
Link: https://lore.kernel.org/r/20230717031221.55073-1-yunchuan@nfschina.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Wu Yunchuan [Mon, 17 Jul 2023 03:12:12 +0000 (11:12 +0800)]
net: mdio: Remove unnecessary (void*) conversions
No need to cast (void *) to (struct xgene_mdio_pdata *).
Signed-off-by: Wu Yunchuan <yunchuan@nfschina.com>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Link: https://lore.kernel.org/r/20230717031212.54991-1-yunchuan@nfschina.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Wu Yunchuan [Mon, 17 Jul 2023 03:12:04 +0000 (11:12 +0800)]
ethernet: smsc: remove unnecessary (void*) conversions
No need to cast (void *) to (struct smsc911x_data *) or
(struct smsc9420_pdata *).
Signed-off-by: Wu Yunchuan <yunchuan@nfschina.com>
Link: https://lore.kernel.org/r/20230717031204.54912-1-yunchuan@nfschina.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Wu Yunchuan [Mon, 17 Jul 2023 03:11:54 +0000 (11:11 +0800)]
ice: remove unnecessary (void*) conversions
No need to cast (void *) to (struct ice_ring_container *).
Signed-off-by: Wu Yunchuan <yunchuan@nfschina.com>
Link: https://lore.kernel.org/r/20230717031154.54740-1-yunchuan@nfschina.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Wu Yunchuan [Mon, 17 Jul 2023 03:11:37 +0000 (11:11 +0800)]
net: hns: Remove unnecessary (void*) conversions
No need to cast (void *) to (struct hns_mdio_device *).
Signed-off-by: Wu Yunchuan <yunchuan@nfschina.com>
Reviewed-by: Hao Lan <lanhao@huawei.com>
Link: https://lore.kernel.org/r/20230717031137.54639-1-yunchuan@nfschina.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Wu Yunchuan [Mon, 17 Jul 2023 03:11:28 +0000 (11:11 +0800)]
net: hns3: remove unnecessary (void*) conversions.
No need to cast (void *) to (struct hns3_nic_priv *).
Signed-off-by: Wu Yunchuan <yunchuan@nfschina.com>
Reviewed-by: Hao Lan <lanhao@huawei.com>
Link: https://lore.kernel.org/r/20230717031128.54557-1-yunchuan@nfschina.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>