Hangbin Liu [Tue, 25 Jan 2022 08:17:14 +0000 (16:17 +0800)]
selftests/bpf/test_lwt_seg6local: use temp netns for testing
Use a temporary netns instead of a hard-coded name for testing, in case a
netns with that name already exists.
Signed-off-by: Hangbin Liu <liuhangbin@gmail.com>
Link: https://lore.kernel.org/r/20220125081717.1260849-5-liuhangbin@gmail.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Hangbin Liu [Tue, 25 Jan 2022 08:17:13 +0000 (16:17 +0800)]
selftests/bpf/test_xdp_vlan: use temp netns for testing
Use a temporary netns instead of a hard-coded name for testing, in case a
netns with that name already exists.
Signed-off-by: Hangbin Liu <liuhangbin@gmail.com>
Link: https://lore.kernel.org/r/20220125081717.1260849-4-liuhangbin@gmail.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Hangbin Liu [Tue, 25 Jan 2022 08:17:12 +0000 (16:17 +0800)]
selftests/bpf/test_xdp_veth: use temp netns for testing
Use a temporary netns instead of a hard-coded name for testing, in case a
netns with that name already exists.
Signed-off-by: Hangbin Liu <liuhangbin@gmail.com>
Link: https://lore.kernel.org/r/20220125081717.1260849-3-liuhangbin@gmail.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Hangbin Liu [Tue, 25 Jan 2022 08:17:11 +0000 (16:17 +0800)]
selftests/bpf/test_xdp_redirect_multi: use temp netns for testing
Use a temporary netns instead of a hard-coded name for testing, in case a
netns with that name already exists.
Remove the hard-coded interface index when creating the veth interfaces,
because when the system loads some virtual interface modules (e.g. tunnels),
ifindex 2 may already be taken and the command will fail.
As the netns has not been created yet if the environment check fails, set the
cleanup trap only after checking the environment.
Fixes: 8955c1a32987 ("selftests/bpf/xdp_redirect_multi: Limit the tests in netns")
Signed-off-by: Hangbin Liu <liuhangbin@gmail.com>
Acked-by: William Tu <u9012063@gmail.com>
Link: https://lore.kernel.org/r/20220125081717.1260849-2-liuhangbin@gmail.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Hou Tao [Thu, 27 Jan 2022 08:32:40 +0000 (16:32 +0800)]
bpf, x86: Remove unnecessary handling of BPF_SUB atomic op
According to the LLVM commit (https://reviews.llvm.org/D72184),
__sync_fetch_and_sub() is implemented as a negation followed by
__sync_fetch_and_add(), so there will never be a BPF_SUB op; just
remove its handling. BPF_SUB is also rejected by the verifier anyway.
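For illustration, a minimal sketch of the point above as ordinary C compiled
with clang for the BPF target (the function name is ours, not from the patch):
long fetch_sub(long *counter, long delta)
{
	/* LLVM lowers this to __sync_fetch_and_add(counter, -delta),
	 * so no BPF_SUB atomic instruction is ever emitted */
	return __sync_fetch_and_sub(counter, delta);
}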
Signed-off-by: Hou Tao <houtao1@huawei.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Brendan Jackman <jackmanb@google.com>
Link: https://lore.kernel.org/bpf/20220127083240.1425481-1-houtao1@huawei.com
Alexei Starovoitov [Thu, 27 Jan 2022 20:03:47 +0000 (12:03 -0800)]
Merge branch 'bpf: add __user tagging support in vmlinux BTF'
Yonghong Song says:
====================
The __user attribute is currently mainly used by sparse for type checking.
The attribute indicates whether a memory access is in user memory address
space or not. Such information is important during tracing kernel
internal functions or data structures as accessing user memory often
has different mechanisms compared to accessing kernel memory. For example,
the perf-probe needs explicit command line specification to indicate a
particular argument or string in user-space memory ([1], [2], [3]).
Currently, vmlinux BTF is available in kernel with many distributions.
If __user attribute information is available in vmlinux BTF, the explicit
user memory access information from users will not be necessary as
the kernel can figure it out by itself with vmlinux BTF.
Besides the above possible use for perf/probe, another use case is
for the bpf verifier. Currently, for BPF_PROG_TYPE_TRACING bpf programs,
users can write direct code like
p->m1->m2
where "p" could be a function parameter. Without __user information in BTF,
the verifier will assume p->m1 is accessing kernel memory and will generate
normal loads. Now suppose "p" is actually tagged __user in the source
code. In that case, p->m1 is actually accessing user memory, so a direct
load is not right and may produce an incorrect result. For such cases,
bpf_probe_read_user() is the correct way to read p->m1.
To support encoding __user information in BTF, a new attribute
__attribute__((btf_type_tag("<arbitrary_string>")))
is implemented in clang ([4]). For example, if we have
#define __user __attribute__((btf_type_tag("user")))
during kernel compilation, the attribute "user" information will
be preserved in dwarf. After pahole converting dwarf to BTF, __user
information will be available in vmlinux BTF and such information
can be used by bpf verifier, perf/probe or other use cases.
Currently btf_type_tag is only supported in clang (>= clang14) and
pahole (>= 1.23). gcc support is also proposed and under development ([5]).
In the rest of the patch set, Patch 1 adds support for the __user btf_type_tag
during compilation. Patch 2 adds bpf verifier support to utilize the __user
tag information and reject bpf programs that do not use the proper helper to
access user memory. Patches 3-5 are bpf selftests which demonstrate that the
verifier can reject direct user memory accesses.
[1] http://lkml.kernel.org/r/155789874562.26965.10836126971405890891.stgit@devnote2
[2] http://lkml.kernel.org/r/155789872187.26965.4468456816590888687.stgit@devnote2
[3] http://lkml.kernel.org/r/155789871009.26965.14167558859557329331.stgit@devnote2
[4] https://reviews.llvm.org/D111199
[5] https://lore.kernel.org/bpf/0cbeb2fb-1a18-f690-e360-24b1c90c2a91@fb.com/
Changelog:
v2 -> v3:
- remove FLAG_DONTCARE enumerator and just use 0 as dontcare flag.
- explain how btf type_tag is encoded in btf type chain.
v1 -> v2:
- use MEM_USER flag for PTR_TO_BTF_ID reg type instead of a separate
field to encode __user tag.
- add a test with kernel function __sys_getsockname which has __user tagged
argument.
====================
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Yonghong Song [Thu, 27 Jan 2022 15:46:27 +0000 (07:46 -0800)]
docs/bpf: clarify how btf_type_tag gets encoded in the type chain
Clarify where the BTF_KIND_TYPE_TAG gets encoded in the type chain,
so applications and the kernel can properly parse it.
Signed-off-by: Yonghong Song <yhs@fb.com>
Link: https://lore.kernel.org/r/20220127154627.665163-1-yhs@fb.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Yonghong Song [Thu, 27 Jan 2022 15:46:22 +0000 (07:46 -0800)]
selftests/bpf: specify pahole version requirement for btf_tag test
Specify pahole version requirement (1.23) for btf_tag subtests
btf_type_tag_user_{mod1, mod2, vmlinux}.
Signed-off-by: Yonghong Song <yhs@fb.com>
Link: https://lore.kernel.org/r/20220127154622.663337-1-yhs@fb.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Yonghong Song [Thu, 27 Jan 2022 15:46:16 +0000 (07:46 -0800)]
selftests/bpf: add a selftest with __user tag
Added a selftest with three __user usages: a __user pointer-type argument
in bpf_testmod, a __user pointer-type struct member in bpf_testmod,
and a __user pointer-type struct member in vmlinux. In all cases,
directly accessing the user memory results in a verification failure.
$ ./test_progs -v -n 22/3
...
libbpf: prog 'test_user1': BPF program load failed: Permission denied
libbpf: prog 'test_user1': -- BEGIN PROG LOAD LOG --
R1 type=ctx expected=fp
0: R1=ctx(id=0,off=0,imm=0) R10=fp0
; int BPF_PROG(test_user1, struct bpf_testmod_btf_type_tag_1 *arg)
0: (79) r1 = *(u64 *)(r1 +0)
func 'bpf_testmod_test_btf_type_tag_user_1' arg0 has btf_id 136561 type STRUCT 'bpf_testmod_btf_type_tag_1'
1: R1_w=user_ptr_bpf_testmod_btf_type_tag_1(id=0,off=0,imm=0)
; g = arg->a;
1: (61) r1 = *(u32 *)(r1 +0)
R1 invalid mem access 'user_ptr_'
...
#22/3 btf_tag/btf_type_tag_user_mod1:OK
$ ./test_progs -v -n 22/4
...
libbpf: prog 'test_user2': BPF program load failed: Permission denied
libbpf: prog 'test_user2': -- BEGIN PROG LOAD LOG --
R1 type=ctx expected=fp
0: R1=ctx(id=0,off=0,imm=0) R10=fp0
; int BPF_PROG(test_user2, struct bpf_testmod_btf_type_tag_2 *arg)
0: (79) r1 = *(u64 *)(r1 +0)
func 'bpf_testmod_test_btf_type_tag_user_2' arg0 has btf_id 136563 type STRUCT 'bpf_testmod_btf_type_tag_2'
1: R1_w=ptr_bpf_testmod_btf_type_tag_2(id=0,off=0,imm=0)
; g = arg->p->a;
1: (79) r1 = *(u64 *)(r1 +0) ; R1_w=user_ptr_bpf_testmod_btf_type_tag_1(id=0,off=0,imm=0)
; g = arg->p->a;
2: (61) r1 = *(u32 *)(r1 +0)
R1 invalid mem access 'user_ptr_'
...
#22/4 btf_tag/btf_type_tag_user_mod2:OK
$ ./test_progs -v -n 22/5
...
libbpf: prog 'test_sys_getsockname': BPF program load failed: Permission denied
libbpf: prog 'test_sys_getsockname': -- BEGIN PROG LOAD LOG --
R1 type=ctx expected=fp
0: R1=ctx(id=0,off=0,imm=0) R10=fp0
; int BPF_PROG(test_sys_getsockname, int fd, struct sockaddr *usockaddr,
0: (79) r1 = *(u64 *)(r1 +8)
func '__sys_getsockname' arg1 has btf_id 2319 type STRUCT 'sockaddr'
1: R1_w=user_ptr_sockaddr(id=0,off=0,imm=0)
; g = usockaddr->sa_family;
1: (69) r1 = *(u16 *)(r1 +0)
R1 invalid mem access 'user_ptr_'
...
#22/5 btf_tag/btf_type_tag_user_vmlinux:OK
Signed-off-by: Yonghong Song <yhs@fb.com>
Link: https://lore.kernel.org/r/20220127154616.659314-1-yhs@fb.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Yonghong Song [Thu, 27 Jan 2022 15:46:11 +0000 (07:46 -0800)]
selftests/bpf: rename btf_decl_tag.c to test_btf_decl_tag.c
The uapi btf.h contains the following declaration:
struct btf_decl_tag {
__s32 component_idx;
};
The skeleton will also generate a struct with name
"btf_decl_tag" for bpf program btf_decl_tag.c.
Rename btf_decl_tag.c to test_btf_decl_tag.c so
the corresponding skeleton struct name becomes
"test_btf_decl_tag". This way, we could include
uapi btf.h in prog_tests/btf_tag.c.
There is no functionality change for this patch.
Signed-off-by: Yonghong Song <yhs@fb.com>
Link: https://lore.kernel.org/r/20220127154611.656699-1-yhs@fb.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Yonghong Song [Thu, 27 Jan 2022 15:46:06 +0000 (07:46 -0800)]
bpf: reject program if a __user tagged memory accessed in kernel way
The BPF verifier supports direct memory access for BPF_PROG_TYPE_TRACING
programs, e.g., a->b. If "a" is a pointer to kernel memory, the verifier
allows the user to write C code like a->b and translates it to a proper
kernel load. If "a" is a pointer to user memory, the bpf developer is
expected to use the bpf_probe_read_user() helper to get the value of a->b.
Without utilizing the BTF __user tagging information, the current verifier
will assume that a->b is a kernel memory access, which may produce an
incorrect result.
Now that BTF contains __user information, the verifier can check whether
the pointer points to user memory or not. If it does, the verifier can
reject the program and force users to use the bpf_probe_read_user()
helper explicitly.
In the future, we can easily extend btf_add_space for other
address space tagging, for example rcu/percpu etc.
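For illustration, a minimal sketch (not the selftest itself) of the accepted
access pattern, reusing the __sys_getsockname example from the selftest log
above; a direct usockaddr->sa_family load would be rejected with
"invalid mem access 'user_ptr_'":
#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

char LICENSE[] SEC("license") = "GPL";

SEC("fentry/__sys_getsockname")
int BPF_PROG(read_user_sockaddr, int fd, struct sockaddr *usockaddr)
{
	unsigned short family = 0;

	/* usockaddr is __user tagged; read it via the helper, not directly */
	bpf_probe_read_user(&family, sizeof(family), &usockaddr->sa_family);
	return 0;
}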
Signed-off-by: Yonghong Song <yhs@fb.com>
Link: https://lore.kernel.org/r/20220127154606.654961-1-yhs@fb.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Yonghong Song [Thu, 27 Jan 2022 15:46:00 +0000 (07:46 -0800)]
compiler_types: define __user as __attribute__((btf_type_tag("user")))
The __user attribute is currently mainly used by sparse for type checking.
The attribute indicates whether a memory access is in user memory address
space or not. Such information is important during tracing kernel
internal functions or data structures as accessing user memory often
has different mechanisms compared to accessing kernel memory. For example,
the perf-probe needs explicit command line specification to indicate a
particular argument or string in user-space memory ([1], [2], [3]).
Currently, vmlinux BTF is available in kernel with many distributions.
If __user attribute information is available in vmlinux BTF, the explicit
user memory access information from users will not be necessary as
the kernel can figure it out by itself with vmlinux BTF.
Besides the above possible use for perf/probe, another use case is
for the bpf verifier. Currently, for BPF_PROG_TYPE_TRACING bpf programs,
users can write direct code like
p->m1->m2
where "p" could be a function parameter. Without __user information in BTF,
the verifier will assume p->m1 is accessing kernel memory and will generate
normal loads. Now suppose "p" is actually tagged __user in the source
code. In that case, p->m1 is actually accessing user memory, so a direct
load is not right and may produce an incorrect result. For such cases,
bpf_probe_read_user() is the correct way to read p->m1.
To support encoding __user information in BTF, a new attribute
__attribute__((btf_type_tag("<arbitrary_string>")))
is implemented in clang ([4]). For example, if we have
#define __user __attribute__((btf_type_tag("user")))
during kernel compilation, the attribute "user" information will
be preserved in dwarf. After pahole converting dwarf to BTF, __user
information will be available in vmlinux BTF.
The following is an example with latest upstream clang (clang14) and
pahole 1.23:
[$ ~] cat test.c
#define __user __attribute__((btf_type_tag("user")))
int foo(int __user *arg) {
return *arg;
}
[$ ~] clang -O2 -g -c test.c
[$ ~] pahole -JV test.o
...
[1] INT int size=4 nr_bits=32 encoding=SIGNED
[2] TYPE_TAG user type_id=1
[3] PTR (anon) type_id=2
[4] FUNC_PROTO (anon) return=1 args=(3 arg)
[5] FUNC foo type_id=4
[$ ~]
You can see for the function argument "int __user *arg", its type is
described as
PTR -> TYPE_TAG(user) -> INT
The kernel can use this information for bpf verification or other
use cases.
Currently, btf_type_tag is only supported in clang (>= clang14) and
pahole (>= 1.23). gcc support is also proposed and under development ([5]).
[1] http://lkml.kernel.org/r/155789874562.26965.10836126971405890891.stgit@devnote2
[2] http://lkml.kernel.org/r/155789872187.26965.4468456816590888687.stgit@devnote2
[3] http://lkml.kernel.org/r/155789871009.26965.14167558859557329331.stgit@devnote2
[4] https://reviews.llvm.org/D111199
[5] https://lore.kernel.org/bpf/0cbeb2fb-1a18-f690-e360-24b1c90c2a91@fb.com/
Signed-off-by: Yonghong Song <yhs@fb.com>
Link: https://lore.kernel.org/r/20220127154600.652613-1-yhs@fb.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Pavel Begunkov [Thu, 27 Jan 2022 14:09:13 +0000 (14:09 +0000)]
cgroup/bpf: fast path skb BPF filtering
Even though there is a static key protecting against the overhead of
cgroup-bpf skb filtering when nothing is attached, in many cases
it's not enough, as registering a filter for one type ruins the fast
path for all others. This is observed in production servers I've looked
at, but also on laptops, where registration is done during init by
systemd or something else.
Add a per-socket fast-path check guarding against such overhead. This
affects both the receive and transmit paths of TCP, UDP and other
protocols. It showed a ~1% tx/s improvement in small-payload UDP
send benchmarks using a real NIC in a server environment, and the
number jumps to 2-3% for preemptible kernels.
Reviewed-by: Stanislav Fomichev <sdf@google.com>
Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Acked-by: Martin KaFai Lau <kafai@fb.com>
Link: https://lore.kernel.org/r/d8c58857113185a764927a46f4b5a058d36d3ec3.1643292455.git.asml.silence@gmail.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Yonghong Song [Thu, 27 Jan 2022 16:37:26 +0000 (08:37 -0800)]
selftests/bpf: fix a clang compilation error
When building selftests/bpf with clang
make -j LLVM=1
make -C tools/testing/selftests/bpf -j LLVM=1
I hit the following compilation error:
trace_helpers.c:152:9: error: variable 'found' is used uninitialized whenever 'while' loop exits because its condition is false [-Werror,-Wsometimes-uninitialized]
while (fscanf(f, "%zx-%zx %s %zx %*[^\n]\n", &start, &end, buf, &base) == 4) {
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
trace_helpers.c:161:7: note: uninitialized use occurs here
if (!found)
^~~~~
trace_helpers.c:152:9: note: remove the condition if it is always true
while (fscanf(f, "%zx-%zx %s %zx %*[^\n]\n", &start, &end, buf, &base) == 4) {
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
1
trace_helpers.c:145:12: note: initialize the variable 'found' to silence this warning
bool found;
^
= false
It is possible that for a sane /proc/self/maps we may never hit the above issue
in practice. But let us initialize the variable 'found' properly to silence the
compilation error.
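For reference, a minimal sketch of the fix, adapted from the loop quoted in
the diagnostic (the match condition is simplified here):
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

static long scan_maps(FILE *f, uintptr_t addr)
{
	size_t start, end, base;
	char buf[256];
	bool found = false;	/* previously left uninitialized */

	while (fscanf(f, "%zx-%zx %s %zx %*[^\n]\n", &start, &end, buf, &base) == 4) {
		if (addr >= start && addr < end) {	/* simplified match */
			found = true;
			break;
		}
	}
	if (!found)
		return -1;
	return (long)(addr - start + base);
}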
Signed-off-by: Yonghong Song <yhs@fb.com>
Link: https://lore.kernel.org/r/20220127163726.1442032-1-yhs@fb.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Magnus Karlsson [Tue, 25 Jan 2022 08:29:45 +0000 (09:29 +0100)]
selftests, xsk: Fix bpf_res cleanup test
After commit 710ad98c363a ("veth: Do not record rx queue hint in veth_xmit"),
veth no longer receives traffic on the same queue as it was sent on. This
breaks the bpf_res test for the AF_XDP selftests, as the socket tied to
queue 1 will not receive traffic anymore.
Modify the test so that two sockets are tied to queue id 0 using a shared
umem instead. When killing the first socket, enter the second socket into
the xskmap so that traffic will flow to it. This still tests that the
resources are not cleaned up until after the second socket dies, without
having to rely on veth supporting rx_queue hints.
Reported-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
Signed-off-by: Magnus Karlsson <magnus.karlsson@intel.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Reviewed-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Link: https://lore.kernel.org/bpf/20220125082945.26179-1-magnus.karlsson@gmail.com
Daniel Borkmann [Thu, 27 Jan 2022 16:25:33 +0000 (17:25 +0100)]
Merge branch 'xsk-batching'
Maciej Fijalkowski says:
====================
Unfortunately, scalability issues similar to those that were addressed for XDP
processing in ice also exist for XDP in the zero-copy driver used by AF_XDP.
Let's resolve them in mostly the same way as we did in [0] and utilize
the Tx batching API from the XSK buffer pool.
Move the array of Tx descriptors that is used with the batching approach to
the XSK buffer pool. This means that future users of this API will not
have to carry the array on their own side; they can simply refer to the
pool's tx_desc array.
We also improve the Rx side, where we extend ice_alloc_rx_buf_zc() to
handle the ring wrap and bump the Rx tail more frequently. By doing so,
the Rx side is adjusted to match Tx, which was needed for the l2fwd scenario.
Here are the performance improvements this set brings, measured with the
xdpsock app in busy-poll mode for 1- and 2-core modes.
Both Tx and Rx rings were sized to 1k entries and the busy-poll budget was
256.
----------------------------------------------------------------
| txonly: | l2fwd | rxdrop
----------------------------------------------------------------
1C | 149% | 14% | 3%
----------------------------------------------------------------
2C | 134% | 20% | 5%
----------------------------------------------------------------
Next step will be to introduce batching onto Rx side.
v5:
* collect acks
* fix typos
* correct comments showing cache line boundaries in ice_tx_ring struct
v4 - address Alexandr's review:
* new patch (2) for making sure ring size is pow(2) when attaching
xsk socket
* don't open code ALIGN_DOWN (patch 3)
* resign from storing tx_thresh in ice_tx_ring (patch 4)
* scope variables in a better way for Tx batching (patch 7)
v3:
* drop likely() that was wrapping napi_complete_done (patch 1)
* introduce configurable Tx threshold (patch 2)
* handle ring wrap on Rx side when allocating buffers (patch 3)
* respect NAPI budget when cleaning Tx descriptors in ZC (patch 6)
v2:
* introduce new patch that resets @next_dd and @next_rs fields
* use batching API for AF_XDP Tx on ice side
[0]: https://lore.kernel.org/bpf/20211015162908.145341-8-anthony.l.nguyen@intel.com/
====================
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Maciej Fijalkowski [Tue, 25 Jan 2022 16:04:46 +0000 (17:04 +0100)]
ice: xsk: Borrow xdp_tx_active logic from i40e
One of the things that commit 5574ff7b7b3d ("i40e: optimize AF_XDP Tx
completion path") introduced was the @xdp_tx_active field. Its usage
from i40e can be adapted to the ice driver and gives us positive performance
results.
If the descriptor that @next_dd points to has been sent by the HW (its DD
bit is set), then we are sure that at least a quarter of the ring is ready
to be cleaned. If @xdp_tx_active is 0, which means the related xdp_ring
is not used for XDP_{TX, REDIRECT} workloads, then we know how many XSK
entries should be placed in the completion queue; IOW, walking through the
ring can be skipped.
Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Reviewed-by: Alexander Lobakin <alexandr.lobakin@intel.com>
Acked-by: Magnus Karlsson <magnus.karlsson@intel.com>
Link: https://lore.kernel.org/bpf/20220125160446.78976-9-maciej.fijalkowski@intel.com
Maciej Fijalkowski [Tue, 25 Jan 2022 16:04:45 +0000 (17:04 +0100)]
ice: xsk: Improve AF_XDP ZC Tx and use batching API
Apply the logic that was done for regular XDP in commit 9610bd988df9
("ice: optimize XDP_TX workloads") to the ZC side of the driver. On top
of that, introduce Tx batching inspired by i40e's implementation, with
adjustments to the cleaning logic - take the NAPI budget into account
in ice_clean_xdp_irq_zc().
Separating the stats structs onto separate cache lines seemed to improve
performance.
Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Reviewed-by: Alexander Lobakin <alexandr.lobakin@intel.com>
Link: https://lore.kernel.org/bpf/20220125160446.78976-8-maciej.fijalkowski@intel.com
Maciej Fijalkowski [Tue, 25 Jan 2022 16:04:44 +0000 (17:04 +0100)]
ice: xsk: Avoid potential dead AF_XDP Tx processing
Commit 9610bd988df9 ("ice: optimize XDP_TX workloads") introduced
@next_dd and @next_rs to the ice_tx_ring struct. Currently, their state is
not restored in ice_clean_tx_ring(), which was not causing any trouble
as the XDP rings are gone after we're done with the XDP prog on the interface.
For the upcoming usage of the mentioned fields in AF_XDP, this might expose us
to a potentially dead Tx side. The scenario would look like the following
(based on xdpsock):
- two xdpsock instances are spawned in Tx mode
- one of them is killed
- the XDP prog is kept on the interface due to the other xdpsock still running
  * this means that the XDP rings stayed in place
- xdpsock is launched again on the same queue id that was terminated on
- the @next_dd and @next_rs settings are bogus, therefore the transmit side is
broken
To protect us from the above, restore the initial @next_rs and @next_dd
values when cleaning the Tx ring.
Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Reviewed-by: Alexander Lobakin <alexandr.lobakin@intel.com>
Acked-by: Magnus Karlsson <magnus.karlsson@intel.com>
Link: https://lore.kernel.org/bpf/20220125160446.78976-7-maciej.fijalkowski@intel.com
Magnus Karlsson [Tue, 25 Jan 2022 16:04:43 +0000 (17:04 +0100)]
i40e: xsk: Move tmp desc array from driver to pool
Move desc_array from the driver to the pool. The reason behind this is
that we can then reuse this array as a temporary storage for descriptors
in all zero-copy drivers that use the batched interface. This will make
it easier to add batching to more drivers.
i40e is the only driver that has a batched Tx zero-copy
implementation, so no need to touch any other driver.
Signed-off-by: Magnus Karlsson <magnus.karlsson@intel.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Reviewed-by: Alexander Lobakin <alexandr.lobakin@intel.com>
Link: https://lore.kernel.org/bpf/20220125160446.78976-6-maciej.fijalkowski@intel.com
Maciej Fijalkowski [Tue, 25 Jan 2022 16:04:42 +0000 (17:04 +0100)]
ice: Make Tx threshold dependent on ring length
XDP_TX workloads use a concept of Tx threshold that indicates the
interval of setting RS bit on descriptors which in turn tells the HW to
generate an interrupt to signal the completion of Tx on HW side. It is
currently based on a constant value of 32 which might not work out well
for various sizes of ring combined with for example batch size that can
be set via SO_BUSY_POLL_BUDGET.
Internal tests based on AF_XDP showed that most convenient setup of
mentioned threshold is when it is equal to quarter of a ring length.
Make use of recently introduced ICE_RING_QUARTER macro and use this
value as a substitute for ICE_TX_THRESH.
Align also ethtool -G callback so that next_dd/next_rs fields are up to
date in terms of the ring size.
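A small numeric illustration of the sizing rule, using a simplified stand-in
for the ICE_RING_QUARTER macro mentioned above (the argument shape here is
ours; the in-tree macro may take the ring struct rather than a plain count):
#define RING_QUARTER(count)	((count) >> 2)

/* old behaviour: RS bit every ICE_TX_THRESH = 32 descriptors, regardless
 * of ring length; new behaviour scales with the ring, e.g. for 1k: */
static const unsigned int tx_thresh_1k = RING_QUARTER(1024);	/* 256 */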
Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Reviewed-by: Alexander Lobakin <alexandr.lobakin@intel.com>
Acked-by: Magnus Karlsson <magnus.karlsson@intel.com>
Link: https://lore.kernel.org/bpf/20220125160446.78976-5-maciej.fijalkowski@intel.com
Maciej Fijalkowski [Tue, 25 Jan 2022 16:04:41 +0000 (17:04 +0100)]
ice: xsk: Handle SW XDP ring wrap and bump tail more often
Currently, if ice_clean_rx_irq_zc() processed the whole ring and
next_to_use != 0, then ice_alloc_rx_buf_zc() would not refill the whole
ring even if the XSK buffer pool had enough free entries (either
from the fill ring or the internal recycle mechanism) - this is because the
ring wrap is not handled.
Improve the logic in ice_alloc_rx_buf_zc() to address the problem above.
Do not clamp the count of buffers that is passed to
xsk_buff_alloc_batch() when next_to_use + buffer count >=
rx_ring->count; instead split it into two calls to the mentioned
function - one for the part up until the wrap and one for the part after
the wrap.
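A hedged sketch of the two-step allocation described above; field names
(xsk_pool, xdp_buf, next_to_use, count) follow the ice driver loosely and
the driver-internal headers are assumed:
#include "ice.h"			/* driver-internal: struct ice_rx_ring (assumed) */
#include <net/xdp_sock_drv.h>		/* xsk_buff_alloc_batch() */

static u32 ice_alloc_bufs_with_wrap(struct ice_rx_ring *rx_ring, u32 count)
{
	u32 ntu = rx_ring->next_to_use;
	u32 until_wrap = rx_ring->count - ntu;
	u32 done;

	if (count <= until_wrap)
		return xsk_buff_alloc_batch(rx_ring->xsk_pool,
					    &rx_ring->xdp_buf[ntu], count);

	/* first call: fill up to the end of the ring */
	done = xsk_buff_alloc_batch(rx_ring->xsk_pool,
				    &rx_ring->xdp_buf[ntu], until_wrap);
	if (done < until_wrap)
		return done;

	/* second call: continue from index 0 after the wrap */
	return done + xsk_buff_alloc_batch(rx_ring->xsk_pool,
					   &rx_ring->xdp_buf[0], count - done);
}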
Signed-off-by: Magnus Karlsson <magnus.karlsson@intel.com>
Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Reviewed-by: Alexander Lobakin <alexandr.lobakin@intel.com>
Link: https://lore.kernel.org/bpf/20220125160446.78976-4-maciej.fijalkowski@intel.com
Maciej Fijalkowski [Tue, 25 Jan 2022 16:04:40 +0000 (17:04 +0100)]
ice: xsk: Force rings to be sized to power of 2
With the upcoming introduction of batching to the XSK data path,
performance-wise it will be best to have the ring descriptor count
aligned to a power of 2.
Check whether the ring sizes the user is going to attach the XSK socket to
fulfill the condition above. For the Tx side, although the check is done
against the Tx queue and in the end the socket will be attached to the XDP
queue, this is fine since XDP queues get the ring->count setting from the Tx
queues.
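A minimal sketch of the validation described above, assuming it runs at XSK
socket attach time; the function name and surrounding hook are ours:
#include <linux/log2.h>
#include <linux/errno.h>
#include <linux/types.h>

static int ice_xsk_check_ring_len(u32 rx_count, u32 tx_count)
{
	/* batching assumes power-of-2 sized descriptor rings */
	if (!is_power_of_2(rx_count) || !is_power_of_2(tx_count))
		return -EINVAL;
	return 0;
}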
Suggested-by: Alexander Lobakin <alexandr.lobakin@intel.com>
Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Reviewed-by: Alexander Lobakin <alexandr.lobakin@intel.com>
Acked-by: Magnus Karlsson <magnus.karlsson@intel.com>
Link: https://lore.kernel.org/bpf/20220125160446.78976-3-maciej.fijalkowski@intel.com
Maciej Fijalkowski [Tue, 25 Jan 2022 16:04:39 +0000 (17:04 +0100)]
ice: Remove likely for napi_complete_done
Remove the likely() before napi_complete_done() as this is the unlikely case
when busy-poll is used. Removing it has a positive performance impact
for busy-poll and no negative impact on the regular case.
Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Reviewed-by: Alexander Lobakin <alexandr.lobakin@intel.com>
Acked-by: Magnus Karlsson <magnus.karlsson@intel.com>
Link: https://lore.kernel.org/bpf/20220125160446.78976-2-maciej.fijalkowski@intel.com
Jakub Kicinski [Wed, 26 Jan 2022 18:54:12 +0000 (10:54 -0800)]
bpf: remove unused static inlines
Remove two dead stubs: sk_msg_clear_meta() was never
used, and the use of xskq_cons_is_full() was replaced by
xsk_tx_writeable() in v5.10.
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Link: https://lore.kernel.org/r/20220126185412.2776254-1-kuba@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Andrii Nakryiko [Wed, 26 Jan 2022 19:30:58 +0000 (11:30 -0800)]
selftests/bpf: fix uprobe offset calculation in selftests
Fix how selftests determine the relative offset of a function that is
uprobed. Previously, there was an assumption that the uprobed function is
always in the first executable region, which is not always the case
(libbpf CI hits this case now), so the get_base_addr() approach in isolation
doesn't work anymore. Teach get_uprobe_offset() to determine the correct
memory mapping and calculate the uprobe offset correctly.
While at it, I merged the two implementations of the
get_uprobe_offset() helper, moving the powerpc64-specific logic inside (had
to add an extra {} block to avoid an unused variable error for insn).
Also ensured that uprobed functions are never inlined, but are still
static (and thus local to each selftest), by using a no-op asm volatile
block internally. I didn't want to keep them global __weak, because some
tests use the uprobe's ref counter offset (to test USDT-like logic), which is
not compatible with a non-refcounted uprobe. So it's nicer to have each
test's uprobe target local to the file and guaranteed to not be inlined or
skipped by the compiler (which can happen with static functions,
especially when compiling selftests with -O2).
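A small sketch of the pattern described above: keep the uprobe target static
(local to the test file) while guaranteeing it is never inlined or optimized
away (the function name is illustrative):
static __attribute__((noinline)) void uprobe_target(void)
{
	/* no-op asm volatile body prevents inlining/elision even at -O2 */
	asm volatile ("");
}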
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/r/20220126193058.3390292-1-andrii@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Yonghong Song [Wed, 26 Jan 2022 18:19:40 +0000 (10:19 -0800)]
selftests/bpf: Fix a clang compilation error
Compiling the kernel and selftests/bpf with the latest llvm like below:
make -j LLVM=1
make -C tools/testing/selftests/bpf -j LLVM=1
I hit the following compilation error:
/.../prog_tests/log_buf.c:215:6: error: variable 'log_buf' is used uninitialized whenever 'if' condition is true [-Werror,-Wsometimes-uninitialized]
if (!ASSERT_OK_PTR(raw_btf_data, "raw_btf_data_good"))
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/.../prog_tests/log_buf.c:264:7: note: uninitialized use occurs here
free(log_buf);
^~~~~~~
/.../prog_tests/log_buf.c:215:2: note: remove the 'if' if its condition is always false
if (!ASSERT_OK_PTR(raw_btf_data, "raw_btf_data_good"))
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/.../prog_tests/log_buf.c:205:15: note: initialize the variable 'log_buf' to silence this warning
char *log_buf;
^
= NULL
1 error generated.
The compiler rightfully detected that log_buf is uninitialized in one of the
failure paths, as indicated above.
Properly initializing the 'log_buf' variable fixed the issue.
Signed-off-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20220126181940.4105997-1-yhs@fb.com
Stanislav Fomichev [Wed, 26 Jan 2022 00:13:40 +0000 (16:13 -0800)]
bpf: fix register_btf_kfunc_id_set for !CONFIG_DEBUG_INFO_BTF
Commit dee872e124e8 ("bpf: Populate kfunc BTF ID sets in struct btf")
breaks loading of some modules when CONFIG_DEBUG_INFO_BTF is not set.
register_btf_kfunc_id_set() returns -ENOENT to the callers when
there is no module btf. Let's return 0 (success) instead to let
those modules work in !CONFIG_DEBUG_INFO_BTF cases.
Acked-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
Fixes: dee872e124e8 ("bpf: Populate kfunc BTF ID sets in struct btf")
Signed-off-by: Stanislav Fomichev <sdf@google.com>
Link: https://lore.kernel.org/r/20220126001340.1573649-1-sdf@google.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Felix Maurer [Tue, 25 Jan 2022 16:58:23 +0000 (17:58 +0100)]
selftests: bpf: Less strict size check in sockopt_sk
Originally, the kernel strictly checked the size of the optval in
getsockopt(TCP_ZEROCOPY_RECEIVE) to be equal to sizeof(struct
tcp_zerocopy_receive). With c8856c0514549, this was changed to allow
optvals of different sizes.
The bpf code in the sockopt_sk test was still performing the strict size
check. This fix adapts the kernel behavior from c8856c0514549 in the
selftest, i.e., it just checks whether the required fields are there.
Fixes: 9cacf81f81611 ("bpf: Remove extra lock_sock for TCP_ZEROCOPY_RECEIVE")
Signed-off-by: Felix Maurer <fmaurer@redhat.com>
Reviewed-by: Stanislav Fomichev <sdf@google.com>
Link: https://lore.kernel.org/r/6f569cca2e45473f9a724d54d03fdfb45f29e35f.1643129402.git.fmaurer@redhat.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Alexei Starovoitov [Wed, 26 Jan 2022 01:59:07 +0000 (17:59 -0800)]
Merge branch 'libbpf: deprecate some setter and getter APIs'
Andrii Nakryiko says:
====================
Another batch of simple deprecations. One of the last few, hopefully, as we
are getting close to deprecating all the planned APIs/features. See
individual patches for details.
v1->v2:
- rebased on latest bpf-next, fixed Closes: reference.
====================
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Andrii Nakryiko [Mon, 24 Jan 2022 19:42:54 +0000 (11:42 -0800)]
perf: use generic bpf_program__set_type() to set BPF prog type
bpf_program__set_<type>() APIs are deprecated, use generic
bpf_program__set_type() APIs instead.
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/r/20220124194254.2051434-8-andrii@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Andrii Nakryiko [Mon, 24 Jan 2022 19:42:53 +0000 (11:42 -0800)]
samples/bpf: use preferred getters/setters instead of deprecated ones
Use preferred setter and getter APIs instead of deprecated ones.
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/r/20220124194254.2051434-7-andrii@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Andrii Nakryiko [Mon, 24 Jan 2022 19:42:52 +0000 (11:42 -0800)]
selftests/bpf: use preferred setter/getter APIs instead of deprecated ones
Switch to using preferred setters and getters instead of deprecated ones.
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/r/20220124194254.2051434-6-andrii@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Andrii Nakryiko [Mon, 24 Jan 2022 19:42:51 +0000 (11:42 -0800)]
bpftool: use preferred setters/getters instead of deprecated ones
Use bpf_program__type() instead of discouraged bpf_program__get_type().
Also switch to bpf_map__set_max_entries() instead of bpf_map__resize().
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/r/20220124194254.2051434-5-andrii@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Andrii Nakryiko [Mon, 24 Jan 2022 19:42:50 +0000 (11:42 -0800)]
libbpf: deprecate bpf_program__is_<type>() and bpf_program__set_<type>() APIs
Not sure why these APIs were added in the first place instead of the
completely generic bpf_program__type() and bpf_program__set_type() APIs
(which don't require constantly adding new APIs with each new BPF program
type). But as it is right now, there are 13 such specialized
is_type/set_type APIs, while the latest kernel is already at 30+
BPF program types.
Instead of completing the set of APIs and keeping on chasing the kernel's
bpf_prog_type enum, deprecate the existing subset and recommend the generic
bpf_program__type() and bpf_program__set_type() APIs.
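For reference, an example of the generic APIs recommended above; "prog" is
assumed to come from an already opened bpf_object:
#include <stdio.h>
#include <bpf/libbpf.h>

static void force_xdp_type(struct bpf_program *prog)
{
	/* replaces the specialized bpf_program__set_xdp(prog) */
	bpf_program__set_type(prog, BPF_PROG_TYPE_XDP);

	/* replaces the discouraged bpf_program__get_type(prog) */
	if (bpf_program__type(prog) != BPF_PROG_TYPE_XDP)
		fprintf(stderr, "unexpected program type\n");
}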
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/r/20220124194254.2051434-4-andrii@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Andrii Nakryiko [Mon, 24 Jan 2022 19:42:49 +0000 (11:42 -0800)]
libbpf: deprecate bpf_map__resize()
Deprecate bpf_map__resize() in favor of the bpf_map__set_max_entries()
setter. In addition to having a surprising name (users often don't
realize that they need to use bpf_map__resize()), the name also implies
some magic way of resizing a BPF map after it is created, which is clearly
not the case.
Another minor annoyance is that bpf_map__resize() disallows a 0 value for
max_entries, which in some cases is totally acceptable (e.g., for the BPF
perf buf case, to let libbpf auto-create one buffer per available CPU core).
[0] Closes: https://github.com/libbpf/libbpf/issues/304
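For reference, the preferred setter named above; "map" is assumed to come
from an already opened bpf_object:
#include <bpf/libbpf.h>

static int size_events_map(struct bpf_map *map, unsigned int n)
{
	/* replaces bpf_map__resize(map, n); unlike the old API, n == 0 is
	 * accepted, e.g. to let libbpf create one perf buffer per CPU for
	 * a PERF_EVENT_ARRAY map */
	return bpf_map__set_max_entries(map, n);
}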
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/r/20220124194254.2051434-3-andrii@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Andrii Nakryiko [Mon, 24 Jan 2022 19:42:48 +0000 (11:42 -0800)]
libbpf: hide and discourage inconsistently named getters
Move a bunch of "getters" into libbpf_legacy.h to keep them there in
libbpf 1.0. See [0] for discussion of "Discouraged APIs". These getters
don't add any maintenance burden and are simple aliases, but they are
inconsistent in naming. So keep them in libbpf_legacy.h instead of
libbpf.h to "hide" them in favor of preferred getters ([1]). Also add two
missing getters: bpf_program__type() and bpf_program__expected_attach_type().
[0] https://github.com/libbpf/libbpf/wiki/Libbpf:-the-road-to-v1.0#handling-deprecation-of-apis-and-functionality
[1] Closes: https://github.com/libbpf/libbpf/issues/307
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/r/20220124194254.2051434-2-andrii@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Andrii Nakryiko [Tue, 25 Jan 2022 04:55:27 +0000 (20:55 -0800)]
Merge branch 'Fix the incorrect register read for syscalls on x86_64'
Kenta Tada says:
====================
Currently, rcx is read as the fourth parameter of a syscall on x86_64,
but the x86_64 Linux system call convention actually uses r10.
This commit adds a wrapper for users who want to access
syscall params to analyze user space.
Changelog:
----------
v1 -> v2:
- Rebase to current bpf-next
https://lore.kernel.org/bpf/20211222213924.1869758-1-andrii@kernel.org/
v2 -> v3:
- Modify the definition of SYSCALL macros for only targeted archs.
- Define __BPF_TARGET_MISSING variants for completeness.
- Remove CORE variants. These macros will not be used.
- Add a selftest.
v3 -> v4:
- Modify a selftest not to use serial tests.
- Modify a selftest to use ASSERT_EQ().
- Extract syscall wrapper for all the other tests.
- Add CORE variants.
v4 -> v5:
- Modify the CORE variant macro not to read memory directly.
- Remove the unnecessary comment.
- Add a selftest for the CORE variant.
====================
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Kenta Tada [Mon, 24 Jan 2022 14:16:22 +0000 (23:16 +0900)]
selftests/bpf: Add a test to confirm PT_REGS_PARM4_SYSCALL
Add a selftest to verify the behavior of PT_REGS_xxx
and the CORE variant.
Signed-off-by: Kenta Tada <Kenta.Tada@sony.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20220124141622.4378-4-Kenta.Tada@sony.com
Kenta Tada [Mon, 24 Jan 2022 14:16:21 +0000 (23:16 +0900)]
libbpf: Fix the incorrect register read for syscalls on x86_64
Currently, rcx is read as the fourth parameter of a syscall on x86_64,
but the x86_64 Linux system call convention actually uses r10.
This commit adds a wrapper for users who want to access
syscall params to analyze user space.
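A simplified illustration of the ABI difference behind the fix (not the
libbpf macros themselves); the struct below is a hypothetical stand-in for
struct pt_regs:
struct regs_sketch {
	unsigned long rcx;
	unsigned long r10;
};

/* ordinary x86-64 call convention: arg4 in rcx */
#define PARM4_FUNC(x)		((x)->rcx)
/* x86-64 syscall convention: arg4 in r10, which the *_SYSCALL macros must read */
#define PARM4_SYSCALL(x)	((x)->r10)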
Signed-off-by: Kenta Tada <Kenta.Tada@sony.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20220124141622.4378-3-Kenta.Tada@sony.com
Kenta Tada [Mon, 24 Jan 2022 14:16:20 +0000 (23:16 +0900)]
selftests/bpf: Extract syscall wrapper
Extract the helper to set up SYS_PREFIX for fentry and kprobe selftests
that use __x86_sys_* attach functions.
Suggested-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Kenta Tada <Kenta.Tada@sony.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20220124141622.4378-2-Kenta.Tada@sony.com
Christy Lee [Tue, 25 Jan 2022 01:09:17 +0000 (17:09 -0800)]
libbpf: Mark bpf_object__open_xattr() deprecated
Mark bpf_object__open_xattr() as deprecated, use
bpf_object__open_file() instead.
[0] Closes: https://github.com/libbpf/libbpf/issues/287
Signed-off-by: Christy Lee <christylee@fb.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20220125010917.679975-1-christylee@fb.com
Andrii Nakryiko [Tue, 25 Jan 2022 04:37:34 +0000 (20:37 -0800)]
Merge branch 'deprecate bpf_object__open_buffer() API'
Christy Lee says:
====================
Deprecate bpf_object__open_buffer() API, replace all usage
with bpf_object__open_mem().
====================
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Christy Lee [Tue, 25 Jan 2022 00:59:23 +0000 (16:59 -0800)]
perf: Stop using bpf_object__open_buffer() API
The bpf_object__open_buffer() API is deprecated; use the unified opts-based
bpf_object__open_mem() API in perf instead. This requires at least
libbpf 0.0.6.
Signed-off-by: Christy Lee <christylee@fb.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20220125005923.418339-3-christylee@fb.com
Christy Lee [Tue, 25 Jan 2022 00:59:22 +0000 (16:59 -0800)]
libbpf: Mark bpf_object__open_buffer() API deprecated
Deprecate bpf_object__open_buffer() API in favor of the unified
opts-based bpf_object__open_mem() API.
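For reference, an example of the opts-based replacement named above; the
buffer is assumed to hold a valid BPF ELF object and the object name is
illustrative:
#include <bpf/libbpf.h>

static struct bpf_object *open_from_buffer(const void *buf, size_t sz)
{
	LIBBPF_OPTS(bpf_object_open_opts, opts,
		.object_name = "from_buffer",
	);

	/* replaces bpf_object__open_buffer(buf, sz, "from_buffer") */
	return bpf_object__open_mem(buf, sz, &opts);
}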
Signed-off-by: Christy Lee <christylee@fb.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20220125005923.418339-2-christylee@fb.com
Alexei Starovoitov [Tue, 25 Jan 2022 03:55:41 +0000 (19:55 -0800)]
Merge branch 'Add bpf_copy_from_user_task helper and sleepable bpf iterator programs'
Kenny Yu says:
====================
This patch series makes the following changes:
* Adds a new bpf helper `bpf_copy_from_user_task` to read user space
memory from a different task.
* Adds the ability to create sleepable bpf iterator programs.
As an example of how this will be used, at Meta we are using bpf task
iterator programs and this new bpf helper to read C++ async stack traces of
a running process for debugging C++ binaries in production.
Changes since v6:
* Split first patch into two patches: first patch to add support
for bpf iterators to use sleepable helpers, and the second to add
the new bpf helper.
* Simplify implementation of `bpf_copy_from_user_task` based on feedback.
* Add to docs that the destination buffer will be zero-ed on error.
Changes since v5:
* Rename `bpf_access_process_vm` to `bpf_copy_from_user_task`.
* Change return value to be all-or-nothing. If we get a partial read,
memset all bytes to 0 and return -EFAULT.
* Add to docs that the helper can only be used by sleepable BPF programs.
* Fix nits in selftests.
Changes since v4:
* Make `flags` into u64.
* Use `user_ptr` arg name to be consistent with `bpf_copy_from_user`.
* Add an extra check in selftests to verify access_process_vm calls
succeeded.
Changes since v3:
* Check if `flags` is 0 and return -EINVAL if not.
* Rebase on latest bpf-next branch and fix merge conflicts.
Changes since v2:
* Reorder arguments in `bpf_access_process_vm` to match existing related
bpf helpers (e.g. `bpf_probe_read_kernel`, `bpf_probe_read_user`,
`bpf_copy_from_user`).
* `flags` argument is provided for future extensibility and is not
currently used, and we always invoke `access_process_vm` with no flags.
* Merge bpf helper patch and `bpf_iter_run_prog` patch together for better
bisectability in case of failures.
* Clean up formatting and comments in selftests.
Changes since v1:
* Fixed "Invalid wait context" issue in `bpf_iter_run_prog` by using
`rcu_read_lock_trace()` for sleepable bpf iterator programs.
====================
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Kenny Yu [Mon, 24 Jan 2022 18:54:03 +0000 (10:54 -0800)]
selftests/bpf: Add test for sleepable bpf iterator programs
This adds a test for bpf iterator programs to make use of sleepable
bpf helpers.
Signed-off-by: Kenny Yu <kennyyu@fb.com>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/r/20220124185403.468466-5-kennyyu@fb.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Kenny Yu [Mon, 24 Jan 2022 18:54:02 +0000 (10:54 -0800)]
libbpf: Add "iter.s" section for sleepable bpf iterator programs
This adds a new bpf section "iter.s" to allow bpf iterator programs to
be sleepable.
Signed-off-by: Kenny Yu <kennyyu@fb.com>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/r/20220124185403.468466-4-kennyyu@fb.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Kenny Yu [Mon, 24 Jan 2022 18:54:01 +0000 (10:54 -0800)]
bpf: Add bpf_copy_from_user_task() helper
This adds a helper for bpf programs to read the memory of other
tasks.
As an example use case at Meta, we are using a bpf task iterator program
and this new helper to print C++ async stack traces for all threads of
a given process.
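A hedged sketch of how the new helper might be used from a sleepable task
iterator, as the commit describes; the address being read (user_ptr_addr) is
a hypothetical placeholder that real users would derive from the target
task's state:
#include "vmlinux.h"
#include <bpf/bpf_helpers.h>

char LICENSE[] SEC("license") = "GPL";

const volatile __u64 user_ptr_addr;	/* hypothetical: set by userspace before load */

SEC("iter.s/task")
int dump_task_mem(struct bpf_iter__task *ctx)
{
	struct task_struct *task = ctx->task;
	__u8 buf[64] = {};

	if (!task)
		return 0;

	/* read 64 bytes of the target task's user memory; on failure the
	 * destination buffer is zeroed and -EFAULT is returned */
	if (bpf_copy_from_user_task(buf, sizeof(buf),
				    (void *)user_ptr_addr, task, 0))
		return 0;

	bpf_seq_write(ctx->meta->seq, buf, sizeof(buf));
	return 0;
}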
Signed-off-by: Kenny Yu <kennyyu@fb.com>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/r/20220124185403.468466-3-kennyyu@fb.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Kenny Yu [Mon, 24 Jan 2022 18:54:00 +0000 (10:54 -0800)]
bpf: Add support for bpf iterator programs to use sleepable helpers
This patch allows bpf iterator programs to use sleepable helpers by
changing `bpf_iter_run_prog` to use the appropriate synchronization.
With sleepable bpf iterator programs, we can no longer use
`rcu_read_lock()` and must use `rcu_read_lock_trace()` instead
to protect the bpf program.
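A simplified sketch of the synchronization choice described above (not a
verbatim copy of bpf_iter_run_prog()):
#include <linux/filter.h>
#include <linux/rcupdate_trace.h>

static int run_iter_prog(struct bpf_prog *prog, void *ctx)
{
	int ret;

	if (prog->aux->sleepable) {
		rcu_read_lock_trace();		/* allows the program to sleep */
		migrate_disable();
		ret = bpf_prog_run(prog, ctx);
		migrate_enable();
		rcu_read_unlock_trace();
	} else {
		rcu_read_lock();
		migrate_disable();
		ret = bpf_prog_run(prog, ctx);
		migrate_enable();
		rcu_read_unlock();
	}
	return ret;
}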
Signed-off-by: Kenny Yu <kennyyu@fb.com>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/r/20220124185403.468466-2-kennyyu@fb.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Jakub Kicinski [Mon, 24 Jan 2022 23:42:28 +0000 (15:42 -0800)]
Merge https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next
Daniel Borkmann says:
====================
pull-request: bpf-next 2022-01-24
We've added 80 non-merge commits during the last 14 day(s) which contain
a total of 128 files changed, 4990 insertions(+), 895 deletions(-).
The main changes are:
1) Add XDP multi-buffer support and implement it for the mvneta driver,
from Lorenzo Bianconi, Eelco Chaudron and Toke Høiland-Jørgensen.
2) Add unstable conntrack lookup helpers for BPF by using the BPF kfunc
infra, from Kumar Kartikeya Dwivedi.
3) Extend BPF cgroup programs to export custom ret value to userspace via
two helpers bpf_get_retval() and bpf_set_retval(), from YiFei Zhu.
4) Add support for AF_UNIX iterator batching, from Kuniyuki Iwashima.
5) Complete missing UAPI BPF helper description and change bpf_doc.py script
to enforce consistent & complete helper documentation, from Usama Arif.
6) Deprecate libbpf's legacy BPF map definitions and streamline XDP APIs to
follow tc-based APIs, from Andrii Nakryiko.
7) Support BPF_PROG_QUERY for BPF programs attached to sockmap, from Di Zhu.
8) Deprecate libbpf's bpf_map__def() API and replace users with proper getters
and setters, from Christy Lee.
9) Extend libbpf's btf__add_btf() with an additional hashmap for strings to
reduce overhead, from Kui-Feng Lee.
10) Fix bpftool and libbpf error handling related to libbpf's hashmap__new()
utility function, from Mauricio Vásquez.
11) Add support to BTF program names in bpftool's program dump, from Raman Shukhau.
12) Fix resolve_btfids build to pick up host flags, from Connor O'Brien.
* https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next: (80 commits)
selftests, bpf: Do not yet switch to new libbpf XDP APIs
selftests, xsk: Fix rx_full stats test
bpf: Fix flexible_array.cocci warnings
xdp: disable XDP_REDIRECT for xdp frags
bpf: selftests: add CPUMAP/DEVMAP selftests for xdp frags
bpf: selftests: introduce bpf_xdp_{load,store}_bytes selftest
net: xdp: introduce bpf_xdp_pointer utility routine
bpf: generalise tail call map compatibility check
libbpf: Add SEC name for xdp frags programs
bpf: selftests: update xdp_adjust_tail selftest to include xdp frags
bpf: test_run: add xdp_shared_info pointer in bpf_test_finish signature
bpf: introduce frags support to bpf_prog_test_run_xdp()
bpf: move user_size out of bpf_test_init
bpf: add frags support to xdp copy helpers
bpf: add frags support to the bpf_xdp_adjust_tail() API
bpf: introduce bpf_xdp_get_buff_len helper
net: mvneta: enable jumbo frames if the loaded XDP program support frags
bpf: introduce BPF_F_XDP_HAS_FRAGS flag in prog_flags loading the ebpf program
net: mvneta: add frags support to XDP_TX
xdp: add frags support to xdp_return_{buff/frame}
...
====================
Link: https://lore.kernel.org/r/20220124221235.18993-1-daniel@iogearbox.net
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Daniel Borkmann [Mon, 24 Jan 2022 22:01:28 +0000 (23:01 +0100)]
selftests, bpf: Do not yet switch to new libbpf XDP APIs
Revert commit 544356524dd6 ("selftests/bpf: switch to new libbpf XDP APIs")
for now, given this will heavily conflict with 4b27480dcaa7 ("bpf/selftests:
convert xdp_link test to ASSERT_* macros") upon merge. Andrii agreed to redo
the conversion cleanly after the trees are merged.
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Jakub Kicinski [Mon, 24 Jan 2022 20:17:58 +0000 (12:17 -0800)]
Merge tag 'linux-can-fixes-for-5.17-20220124' of git://git.kernel.org/pub/scm/linux/kernel/git/mkl/linux-can
Marc Kleine-Budde says:
====================
pull-request: can 2022-01-24
The first patch updates the email address of Brian Silverman from his
former employer to his private address.
The next patch fixes DT bindings information for the tcan4x5x SPI CAN
driver.
The following patch targets the m_can driver and fixes the
introduction of FIFO bulk read support.
Another patch, for the tcan4x5x driver, fixes the max register
value for the regmap config.
The last patch, for the flexcan driver, marks the RX mailbox support for
the MCF5441X as supported.
* tag 'linux-can-fixes-for-5.17-20220124' of git://git.kernel.org/pub/scm/linux/kernel/git/mkl/linux-can:
can: flexcan: mark RX via mailboxes as supported on MCF5441X
can: tcan4x5x: regmap: fix max register value
can: m_can: m_can_fifo_{read,write}: don't read or write from/to FIFO if length is 0
dt-bindings: can: tcan4x5x: fix mram-cfg RX FIFO config
mailmap: update email address of Brian Silverman
====================
Link: https://lore.kernel.org/r/20220124175955.3464134-1-mkl@pengutronix.de
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Marc Kleine-Budde [Fri, 21 Jan 2022 08:22:34 +0000 (09:22 +0100)]
can: flexcan: mark RX via mailboxes as supported on MCF5441X
Most flexcan IP cores support 2 RX modes:
- FIFO
- mailbox
The flexcan IP core on the MCF5441X cannot receive CAN RTR messages
via mailboxes. However, the mailbox mode is more performant. The commit
| 1c45f5778a3b ("can: flexcan: add ethtool support to change rx-rtr setting during runtime")
added support for switching from FIFO to mailbox mode on these cores.
After Angelo Dureghello tested the mailbox mode on the MCF5441X,
this patch marks it (without RTR capability) as supported. Further, the
IP core overview table is updated to note that RTR reception via mailboxes is
not supported.
Link: https://lore.kernel.org/all/20220121084425.3141218-1-mkl@pengutronix.de
Tested-by: Angelo Dureghello <angelo@kernel-space.org>
Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
Marc Kleine-Budde [Fri, 14 Jan 2022 17:50:54 +0000 (18:50 +0100)]
can: tcan4x5x: regmap: fix max register value
The MRAM of the tcan4x5x has a size of 2K and starts at 0x8000. There
are no further registers in the tcan4x5x, making 0x87fc the biggest
addressable register.
This patch fixes the max register value of the regmap config from
0x8ffc to 0x87fc.
Fixes: 6e1caaf8ed22 ("can: tcan4x5x: fix max register value")
Link: https://lore.kernel.org/all/20220119064011.2943292-1-mkl@pengutronix.de
Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
Marc Kleine-Budde [Fri, 14 Jan 2022 14:35:01 +0000 (15:35 +0100)]
can: m_can: m_can_fifo_{read,write}: don't read or write from/to FIFO if length is 0
In order to optimize FIFO access, especially on m_can cores attached
to slow busses like SPI, in patch
| e39381770ec9 ("can: m_can: Disable IRQs on FIFO bus errors")
bulk read/write support has been added to the m_can_fifo_{read,write}
functions.
That change leads the tcan driver to call
regmap_bulk_{read,write}() with a length of 0 (for CAN frames with 0
data length). regmap treats this as an error:
| tcan4x5x spi1.0 tcan4x5x0: FIFO write returned -22
This patch fixes the problem by not calling
cdev->ops->{read,write}_fifo() in case of a 0-length read/write.
Fixes: e39381770ec9 ("can: m_can: Disable IRQs on FIFO bus errors")
Link: https://lore.kernel.org/all/20220114155751.2651888-1-mkl@pengutronix.de
Cc: stable@vger.kernel.org
Cc: Matt Kline <matt@bitbashing.io>
Cc: Chandrasekar Ramakrishnan <rcsekar@samsung.com>
Reported-by: Michael Anochin <anochin@photo-meter.com>
Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
Marc Kleine-Budde [Fri, 14 Jan 2022 17:47:41 +0000 (18:47 +0100)]
dt-bindings: can: tcan4x5x: fix mram-cfg RX FIFO config
The tcan4x5x only comes with 2K of MRAM; an RX FIFO with a depth of 32
doesn't fit into the MRAM. Use a depth of 16 instead.
Fixes: 4edd396a1911 ("dt-bindings: can: tcan4x5x: Add DT bindings for TCAN4x5X driver")
Link: https://lore.kernel.org/all/20220119062951.2939851-1-mkl@pengutronix.de
Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
Marc Kleine-Budde [Mon, 10 Jan 2022 08:20:19 +0000 (09:20 +0100)]
mailmap: update email address of Brian Silverman
Brian Silverman's address at bluerivertech.com is not valid anymore,
use Brian's private email address instead.
Link: https://lore.kernel.org/all/20220110082359.2019735-1-mkl@pengutronix.de
Cc: Brian Silverman <bsilver16384@gmail.com>
Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
Magnus Karlsson [Fri, 21 Jan 2022 12:35:08 +0000 (13:35 +0100)]
selftests, xsk: Fix rx_full stats test
Fix the rx_full stats test so that it correctly reports pass even when
the fill ring is not full of buffers.
Fixes: 872a1184dbf2 ("selftests: xsk: Put the same buffer only once in the fill ring")
Signed-off-by: Magnus Karlsson <magnus.karlsson@intel.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Tested-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
Acked-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
Link: https://lore.kernel.org/bpf/20220121123508.12759-1-magnus.karlsson@gmail.com
kernel test robot [Sat, 22 Jan 2022 11:09:44 +0000 (12:09 +0100)]
bpf: Fix flexible_array.cocci warnings
Zero-length and one-element arrays are deprecated, see:
Documentation/process/deprecated.rst
Flexible-array members should be used instead.
Generated by: scripts/coccinelle/misc/flexible_array.cocci
Fixes: c1ff181ffabc ("selftests/bpf: Extend kfunc selftests")
Reported-by: kernel test robot <lkp@intel.com>
Signed-off-by: kernel test robot <lkp@intel.com>
Signed-off-by: Julia Lawall <julia.lawall@inria.fr>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Cc: Kumar Kartikeya Dwivedi <memxor@gmail.com>
Link: https://lore.kernel.org/bpf/alpine.DEB.2.22.394.2201221206320.12220@hadrien
Jisheng Zhang [Sun, 23 Jan 2022 12:27:58 +0000 (20:27 +0800)]
net: stmmac: remove unused members in struct stmmac_priv
The tx_coalesce and mii_irq are not used at all now, so remove them.
Signed-off-by: Jisheng Zhang <jszhang@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Christophe JAILLET [Sun, 23 Jan 2022 06:53:46 +0000 (07:53 +0100)]
net: atlantic: Use the bitmap API instead of hand-writing it
Simplify code by using bitmap_weight() and bitmap_zero() instead of
hand-writing these functions.
Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr>
Reviewed-by: Igor Russkikh <irusskikh@marvell.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Xin Long [Sat, 22 Jan 2022 11:40:56 +0000 (06:40 -0500)]
ping: fix the sk_bound_dev_if match in ping_lookup
When 'ping' switches to use a PING socket instead of a RAW socket via:
# sysctl -w net.ipv4.ping_group_range="0 100"
the selftest 'router_broadcast.sh' will fail, as a command such as
# ip vrf exec vrf-h1 ping -I veth0 198.51.100.255 -b
can't receive the response skb via the PING socket. It's caused by a mismatch
of sk_bound_dev_if and dif in ping_rcv() when looking up the PING socket,
as dif is vrf-h1 if dif's master was set to vrf-h1.
This patch fixes the regression by also checking sk_bound_dev_if
against sdif so that packets can still be received even if the socket
is not bound to the vrf device but to the real iif.
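A hedged sketch of the match condition the fix describes: accept a PING
socket that is unbound, bound to the receiving device (dif), or bound to the
real ingress interface (sdif) when the packet arrived via an L3 master device
such as a VRF; the helper name is illustrative:
#include <net/sock.h>

static bool ping_sk_bound_dev_match(const struct sock *sk, int dif, int sdif)
{
	return !sk->sk_bound_dev_if ||
	       sk->sk_bound_dev_if == dif ||
	       sk->sk_bound_dev_if == sdif;
}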
Fixes: c319b4d76b9e ("net: ipv4: add IPPROTO_ICMP socket kind")
Reported-by: Hangbin Liu <liuhangbin@gmail.com>
Signed-off-by: Xin Long <lucien.xin@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Wen Gu [Sat, 22 Jan 2022 09:43:09 +0000 (17:43 +0800)]
net/smc: Transitional solution for clcsock race issue
We encountered a crash in smc_setsockopt() and it is caused by
accessing smc->clcsock after clcsock was released.
BUG: kernel NULL pointer dereference, address: 0000000000000020
#PF: supervisor read access in kernel mode
#PF: error_code(0x0000) - not-present page
PGD 0 P4D 0
Oops: 0000 [#1] PREEMPT SMP PTI
CPU: 1 PID: 50309 Comm: nginx Kdump: loaded Tainted: G E 5.16.0-rc4+ #53
RIP: 0010:smc_setsockopt+0x59/0x280 [smc]
Call Trace:
<TASK>
__sys_setsockopt+0xfc/0x190
__x64_sys_setsockopt+0x20/0x30
do_syscall_64+0x34/0x90
entry_SYSCALL_64_after_hwframe+0x44/0xae
RIP: 0033:0x7f16ba83918e
</TASK>
This patch tries to fix it by holding clcsock_release_lock and
checking whether clcsock has already been released before accessing it.
Since a crash of the same kind could also happen in smc_getsockopt()
or smc_switch_to_fallback(), this patch checks smc->clcsock in them
too, and callers of smc_switch_to_fallback() now identify whether the
fallback succeeded from its return value.
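A hedged sketch of the check-under-lock pattern described above (kernel context and the net/smc smc_sock fields assumed; not the exact diff):
static int smc_clcsock_access_sketch(struct smc_sock *smc)
{
        int rc = 0;

        mutex_lock(&smc->clcsock_release_lock);
        if (!smc->clcsock) {
                /* clcsock already released by a racing close/fallback */
                rc = -EBADF;
        } else {
                /* smc->clcsock may be dereferenced safely here */
        }
        mutex_unlock(&smc->clcsock_release_lock);
        return rc;
}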
Fixes:
fd57770dd198 ("net/smc: wait for pending work before clcsock release_sock")
Link: https://lore.kernel.org/lkml/5dd7ffd1-28e2-24cc-9442-1defec27375e@linux.ibm.com/T/
Signed-off-by: Wen Gu <guwen@linux.alibaba.com>
Acked-by: Karsten Graul <kgraul@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Sukadev Bhattiprolu [Sat, 22 Jan 2022 02:59:21 +0000 (18:59 -0800)]
ibmvnic: remove unused ->wait_capability
With the previous bug fix, the ->wait_capability flag is no longer needed and
can be removed.
Fixes:
249168ad07cd ("ibmvnic: Make CRQ interrupt tasklet wait for all capabilities crqs")
Signed-off-by: Sukadev Bhattiprolu <sukadev@linux.ibm.com>
Reviewed-by: Dany Madden <drt@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Sukadev Bhattiprolu [Sat, 22 Jan 2022 02:59:20 +0000 (18:59 -0800)]
ibmvnic: don't spin in tasklet
ibmvnic_tasklet() continuously spins waiting for responses to all
capability requests. It does this to avoid encountering an error
during initialization of the vnic. However, if there is a bug in the
VIOS and we do not receive a response to one or more queries, the
tasklet ends up spinning continuously, leading to hard lockups.
If we fail to receive a message from the VIOS, it is reasonable to
time out the login attempt rather than spin indefinitely in the tasklet.
Fixes:
249168ad07cd ("ibmvnic: Make CRQ interrupt tasklet wait for all capabilities crqs")
Signed-off-by: Sukadev Bhattiprolu <sukadev@linux.ibm.com>
Reviewed-by: Dany Madden <drt@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Sukadev Bhattiprolu [Sat, 22 Jan 2022 02:59:19 +0000 (18:59 -0800)]
ibmvnic: init ->running_cap_crqs early
We use ->running_cap_crqs to determine when the ibmvnic_tasklet() should
send out the next protocol message type, i.e. when we get back responses
to all our QUERY_CAPABILITY CRQs we send out REQUEST_CAPABILITY CRQs.
Similarly, when we get responses to all the REQUEST_CAPABILITY CRQs, we
send out the QUERY_IP_OFFLOAD CRQ.
We currently increment ->running_cap_crqs as we send out each CRQ and
have the ibmvnic_tasklet() send out the next message type, when this
running_cap_crqs count drops to 0.
This assumes that all the CRQs of the current type were sent out before
the count drops to 0. However it is possible that we send out say 6 CRQs,
get preempted and receive all the 6 responses before we send out the
remaining CRQs. This can result in ->running_cap_crqs count dropping to
zero before all messages of the current type were sent and we end up
sending the next protocol message too early.
Instead initialize the ->running_cap_crqs upfront so the tasklet will
only send the next protocol message after all responses are received.
Use the cap_reqs local variable to also detect any discrepancy (either
now or in future) in the number of capability requests we actually send.
Currently only send_query_cap() is affected by this behavior (of sending
the next message early) since it is called from the worker thread (during
reset) and from the application thread (during ->ndo_open()), both of which
can be preempted. send_request_cap() is only called from the tasklet, which
processes CRQ responses sequentially, so it is not affected. But to
maintain the existing symmetry with send_query_cap(), we update
send_request_cap() as well.
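A hedged sketch of the ordering change described above (illustrative names, not the actual ibmvnic code; the counter is assumed to be the atomic value the tasklet reads): publish the full count before sending the first CRQ, so the tasklet can never see it hit zero early.
static void send_query_caps_sketch(struct ibmvnic_adapter *adapter,
                                   union ibmvnic_crq *crqs, int cap_reqs)
{
        int i;

        /* Publish the total first; only then start sending. */
        atomic_set(&adapter->running_cap_crqs, cap_reqs);
        for (i = 0; i < cap_reqs; i++)
                ibmvnic_send_crq(adapter, &crqs[i]);
}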
Fixes:
249168ad07cd ("ibmvnic: Make CRQ interrupt tasklet wait for all capabilities crqs")
Signed-off-by: Sukadev Bhattiprolu <sukadev@linux.ibm.com>
Reviewed-by: Dany Madden <drt@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Sukadev Bhattiprolu [Sat, 22 Jan 2022 02:59:18 +0000 (18:59 -0800)]
ibmvnic: Allow extra failures before disabling
If auto-priority-failover (APF) is enabled and there are at least two
backing devices of different priorities, some resets like fail-over,
change-param etc can cause at least two back to back failovers. (Failover
from high priority backing device to lower priority one and then back
to the higher priority one if that is still functional).
Depending on the timing of the two failovers, it is possible to trigger
a "hard" reset and for the hard reset to fail due to the failovers. When this
occurs, the driver assumes that the network is unstable and disables the
VNIC for a 60-second "settling time". This in turn can cause the ethtool
command to fail with "No such device" while the vnic automatically recovers
a little while later.
Given that it's possible to have two back-to-back failures, allow for extra
failures before disabling the vnic for the settling time.
Fixes:
f15fde9d47b8 ("ibmvnic: delay next reset if hard reset fails")
Signed-off-by: Sukadev Bhattiprolu <sukadev@linux.ibm.com>
Reviewed-by: Dany Madden <drt@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Jakub Kicinski [Sat, 22 Jan 2022 00:57:31 +0000 (16:57 -0800)]
ipv4: fix ip option filtering for locally generated fragments
During IP fragmentation we sanitize IP options. This means overwriting
options which should not be copied with NOPs. Only the first fragment
has the original, full options.
ip_fraglist_prepare() copies the IP header and options from previous
fragment to the next one. Commit
19c3401a917b ("net: ipv4: place control
buffer handling away from fragmentation iterators") moved sanitizing
options before ip_fraglist_prepare() which means options are sanitized
and then overwritten again with the old values.
Fixing this is not enough, however, nor did the sanitization work
prior to the aforementioned commit.
ip_options_fragment() (which does the sanitization) uses ipcb->opt.optlen
for the length of the options. ipcb->opt of fragments is not populated
(it's 0), only the head skb has the state properly built. So even when
called at the right time ip_options_fragment() does nothing. This seems
to date back all the way to v2.5.44 when the fast path for pre-fragmented
skbs had been introduced. Prior to that ip_options_build() would have been
called for every fragment (in fact ever since v2.5.44 the fragmentation
handling in ip_options_build() has been dead code, I'll clean it up in
-next).
In the original patch (see Link) caixf mentions fixing the handling
for fragments other than the second one, but I'm not sure how _any_
fragment could have had their options sanitized with the code
as it stood.
Tested with python (MTU on lo lowered to 1000 to force fragmentation):
import socket
s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.setsockopt(socket.IPPROTO_IP, socket.IP_OPTIONS,
             bytearray([7,4,5,192, 20|0x80,4,1,0]))
s.sendto(b'1'*2000, ('127.0.0.1', 1234))
Before:
IP (tos 0x0, ttl 64, id 1053, offset 0, flags [+], proto UDP (17), length 996, options (RR [bad length 4] [bad ptr 5] 192.148.4.1,,RA value 256))
localhost.36500 > localhost.search-agent: UDP, length 2000
IP (tos 0x0, ttl 64, id 1053, offset 968, flags [+], proto UDP (17), length 996, options (RR [bad length 4] [bad ptr 5] 192.148.4.1,,RA value 256))
localhost > localhost: udp
IP (tos 0x0, ttl 64, id 1053, offset 1936, flags [none], proto UDP (17), length 100, options (RR [bad length 4] [bad ptr 5] 192.148.4.1,,RA value 256))
localhost > localhost: udp
After:
IP (tos 0x0, ttl 96, id 42549, offset 0, flags [+], proto UDP (17), length 996, options (RR [bad length 4] [bad ptr 5] 192.148.4.1,,RA value 256))
localhost.51607 > localhost.search-agent: UDP, bad length 2000 > 960
IP (tos 0x0, ttl 96, id 42549, offset 968, flags [+], proto UDP (17), length 996, options (NOP,NOP,NOP,NOP,RA value 256))
localhost > localhost: udp
IP (tos 0x0, ttl 96, id 42549, offset 1936, flags [none], proto UDP (17), length 100, options (NOP,NOP,NOP,NOP,RA value 256))
localhost > localhost: udp
RA (20 | 0x80) is now copied as expected, RR (7) is "NOPed out".
Link: https://lore.kernel.org/netdev/20220107080559.122713-1-ooppublic@163.com/
Fixes:
19c3401a917b ("net: ipv4: place control buffer handling away from fragmentation iterators")
Fixes:
1da177e4c3f4 ("Linux-2.6.12-rc2")
Signed-off-by: caixf <ooppublic@163.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Jianguo Wu [Fri, 21 Jan 2022 09:15:31 +0000 (17:15 +0800)]
net-procfs: show net devices bound packet types
After commit:
7866a621043f ("dev: add per net_device packet type chains"),
we can no longer get the packet types that are bound to a specific net device
via /proc/net/ptype; this patch fixes the regression.
With "tcpdump -i ens192 udp -nns0" running, /proc/net/ptype before and after
applying this patch:
Before:
[root@localhost ~]# cat /proc/net/ptype
Type Device Function
0800 ip_rcv
0806 arp_rcv
86dd ipv6_rcv
After:
[root@localhost ~]# cat /proc/net/ptype
Type Device Function
ALL ens192 tpacket_rcv
0800 ip_rcv
0806 arp_rcv
86dd ipv6_rcv
v1 -> v2:
- fix the regression rather than adding new /proc API as
suggested by Stephen Hemminger.
Fixes:
7866a621043f ("dev: add per net_device packet type chains")
Signed-off-by: Jianguo Wu <wujianguo@chinatelecom.cn>
Signed-off-by: David S. Miller <davem@davemloft.net>
Hangbin Liu [Fri, 21 Jan 2022 08:25:18 +0000 (16:25 +0800)]
bonding: use rcu_dereference_rtnl when get bonding active slave
bond_option_active_slave_get_rcu() should not be called with only rtnl_mutex
held, as it uses rcu_dereference(). Replace it with rcu_dereference_rtnl() so
the function can also be used in an RTNL-protected context.
With this update, we can remove the rcu_read_lock/unlock in the
bonding .ndo_eth_ioctl and .get_ts_info handlers.
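A hedged sketch of the accessor pattern described above (kernel context assumed; field names follow the bonding driver):
static struct slave *bond_active_slave_sketch(struct bonding *bond)
{
        /* Legal under either rcu_read_lock() or RTNL, so RTNL-only callers
         * such as .ndo_eth_ioctl need no extra RCU read-side section. */
        return rcu_dereference_rtnl(bond->curr_active_slave);
}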
Reported-by: Vladimir Oltean <vladimir.oltean@nxp.com>
Fixes:
94dd016ae538 ("bond: pass get_ts_info and SIOC[SG]HWTSTAMP ioctl to active device")
Signed-off-by: Hangbin Liu <liuhangbin@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Marek Behún [Wed, 19 Jan 2022 16:44:55 +0000 (17:44 +0100)]
net: sfp: ignore disabled SFP node
Commit
ce0aa27ff3f6 ("sfp: add sfp-bus to bridge between network devices
and sfp cages") added code which finds SFP bus DT node even if the node
is disabled with status = "disabled". Because of this, when phylink is
created, it ends with non-null .sfp_bus member, even though the SFP
module is not probed (because the node is disabled).
We need to ignore disabled SFP bus node.
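A hedged sketch of the kind of availability check described above (the exact call site in the sfp-bus code differs):
#include <linux/property.h>
static bool sfp_fwnode_usable_sketch(const struct fwnode_handle *sfp_fwnode)
{
        /* Skip nodes with status = "disabled" before treating the port as
         * having an SFP cage. */
        return sfp_fwnode && fwnode_device_is_available(sfp_fwnode);
}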
Fixes:
ce0aa27ff3f6 ("sfp: add sfp-bus to bridge between network devices and sfp cages")
Signed-off-by: Marek Behún <kabel@kernel.org>
Cc: stable@vger.kernel.org # 2203cbf2c8b5 ("net: sfp: move fwnode parsing into sfp-bus layer")
Signed-off-by: David S. Miller <davem@davemloft.net>
Justin Iurman [Fri, 21 Jan 2022 17:34:49 +0000 (18:34 +0100)]
selftests: net: ioam: expect support for Queue depth data
The IOAM queue-depth data field was added a few weeks ago,
but the test unit was not updated accordingly.
Reported-by: kernel test robot <oliver.sang@intel.com>
Fixes:
b63c5478e9cb ("ipv6: ioam: Support for Queue depth data field")
Signed-off-by: Justin Iurman <justin.iurman@uliege.be>
Link: https://lore.kernel.org/r/20220121173449.26918-1-justin.iurman@uliege.be
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Kees Cook [Fri, 21 Jan 2022 07:39:35 +0000 (23:39 -0800)]
mptcp: Use struct_group() to avoid cross-field memset()
In preparation for FORTIFY_SOURCE performing compile-time and run-time
field bounds checking for memcpy(), memmove(), and memset(), avoid
intentionally writing across neighboring fields.
Use struct_group() to capture the fields to be reset, so that memset()
can be appropriately bounds-checked by the compiler.
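A hedged illustration of the struct_group() pattern (hypothetical struct and field names, not the mptcp code; kernel headers assumed):
#include <linux/stddef.h>
#include <linux/string.h>
#include <linux/types.h>
struct example_conn {
        int     id;                     /* not cleared on reset */
        struct_group(reset_state,       /* members cleared as one unit */
                u32     snd_una;
                u32     rcv_nxt;
        );
};
static void example_conn_reset(struct example_conn *c)
{
        /* sizeof(c->reset_state) covers exactly the grouped members, so the
         * compiler/FORTIFY_SOURCE can bounds-check this memset(). */
        memset(&c->reset_state, 0, sizeof(c->reset_state));
}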
Cc: Matthieu Baerts <matthieu.baerts@tessares.net>
Cc: mptcp@lists.linux.dev
Signed-off-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Mat Martineau <mathew.j.martineau@linux.intel.com>
Link: https://lore.kernel.org/r/20220121073935.1154263-1-keescook@chromium.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
David Howells [Fri, 21 Jan 2022 23:12:58 +0000 (23:12 +0000)]
rxrpc: Adjust retransmission backoff
Improve retransmission backoff by only backing off when we retransmit data
packets rather than when we set the lost ack timer.
To this end:
(1) In rxrpc_resend(), use rxrpc_get_rto_backoff() when setting the
retransmission timer and only tell it that we are retransmitting if we
actually have things to retransmit.
Note that it's possible for the retransmission algorithm to race with
the processing of a received ACK, so we may see no packets needing
retransmission.
(2) In rxrpc_send_data_packet(), don't bump the backoff when setting the
ack_lost_at timer, as it may then get bumped twice.
With this, when looking at one particular packet, the retransmission
intervals were seen to be 1.5ms, 2ms, 3ms, 5ms, 9ms, 17ms, 33ms, 71ms,
136ms, 264ms, 544ms, 1.088s, 2.1s, 4.2s and 8.3s.
Fixes:
c410bf01933e ("rxrpc: Fix the excessive initial retransmission timeout")
Suggested-by: Marc Dionne <marc.dionne@auristor.com>
Signed-off-by: David Howells <dhowells@redhat.com>
Reviewed-by: Marc Dionne <marc.dionne@auristor.com>
Tested-by: Marc Dionne <marc.dionne@auristor.com>
cc: linux-afs@lists.infradead.org
Link: https://lore.kernel.org/r/164138117069.2023386.17446904856843997127.stgit@warthog.procyon.org.uk/
Signed-off-by: David S. Miller <davem@davemloft.net>
Alexei Starovoitov [Fri, 21 Jan 2022 22:14:03 +0000 (14:14 -0800)]
Merge branch 'mvneta: introduce XDP multi-buffer support'
Lorenzo Bianconi says:
====================
This series introduces XDP frags support. The mvneta driver is
the first to support these new "non-linear" xdp_{buff,frame}. Reviewers
please focus on how these new types of xdp_{buff,frame} packets
traverse the different layers and the layout design. It is on purpose
that BPF-helpers are kept simple, as we don't want to expose the
internal layout to allow later changes.
The main idea for the new XDP frags layout is to reuse the same structure
used for non-linear SKBs. This relies on the "skb_shared_info" struct at the
end of the first buffer to link together subsequent buffers. Keeping the
layout compatible with SKBs is also done to ease and speed up creating an
SKB from an xdp_{buff,frame}.
Converting an xdp_frame to an SKB and delivering it to the network stack is
shown in patch 05/18 (e.g. for cpumaps).
A frags bit (XDP_FLAGS_HAS_FRAGS) has been introduced in the flags
field of xdp_{buff,frame} structure to notify the bpf/network layer if
this is a non-linear xdp frame (XDP_FLAGS_HAS_FRAGS set) or not
(XDP_FLAGS_HAS_FRAGS not set).
The frags bit will be set by an xdp frags capable driver only for non-linear
frames, maintaining the capability to receive linear frames without any extra
cost, since the skb_shared_info structure at the end of the first buffer will
be initialized only if the XDP_FLAGS_HAS_FRAGS bit is set. Moreover, the
flags field in xdp_{buff,frame} will be reused for xdp rx csum offloading
in a future series.
Typical use cases for this series are:
- Jumbo-frames
- Packet header split (please see Google’s use-case @ NetDevConf 0x14, [0])
- TSO/GRO for XDP_REDIRECT
The following three eBPF helpers (and related selftests) have been introduced:
- bpf_xdp_load_bytes:
This helper is provided as an easy way to load data from an xdp buffer. It
can be used to load len bytes at offset from the frame associated with
xdp_md into the buffer pointed to by buf.
- bpf_xdp_store_bytes:
Store len bytes from the buffer buf into the frame associated with xdp_md,
at offset.
- bpf_xdp_get_buff_len:
Return the total frame size (linear + paged parts).
bpf_xdp_adjust_tail and bpf_xdp_copy helpers have been modified to take into
account non-linear xdp frames.
Moreover, similar to skb_header_pointer, we introduced the bpf_xdp_pointer
utility routine to return a pointer to a given position in the xdp_buff if
the requested area (offset + len) is contained in a contiguous memory area;
otherwise it must be copied into a bounce buffer provided by the caller via
bpf_xdp_copy_buf().
The BPF_F_XDP_HAS_FRAGS flag has been introduced to notify the kernel that
the eBPF program fully supports xdp frags.
SEC("xdp.frags"), SEC_DEF("xdp.frags/devmap") and SEC_DEF("xdp.frags/cpumap")
have been introduced to declare xdp frags support.
The NIC driver is expected to reject an eBPF program if it is running in
XDP frags mode and the program does not support XDP frags.
In the same way, it is not possible to mix XDP frags and XDP legacy
programs in a CPUMAP/DEVMAP, or to tail call an XDP frags/legacy program
from a legacy/frags one.
More info about the main idea behind this approach can be found here [1][2].
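As a minimal, hedged illustration of how the new SEC name and helpers fit together (a sketch, not code from the series; assumes the usual <linux/bpf.h> and <bpf/bpf_helpers.h> build):
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>
SEC("xdp.frags")
int xdp_frags_example(struct xdp_md *ctx)
{
        __u8 tail[4];
        __u64 len = bpf_xdp_get_buff_len(ctx);  /* linear + paged length */

        if (len < sizeof(tail))
                return XDP_DROP;

        /* Works even if the requested bytes live in a fragment: the helper
         * copies through a bounce buffer when the area is not contiguous. */
        if (bpf_xdp_load_bytes(ctx, len - sizeof(tail), tail, sizeof(tail)))
                return XDP_DROP;

        return XDP_PASS;
}
char LICENSE[] SEC("license") = "GPL";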
Changes since v22:
- remove leftover CHECK macro usage
- reintroduce SEC_XDP_FRAGS flag in sec_def_flags
- rename xdp multi_frags to xdp frags
- do not report xdp_frags support in fdinfo
Changes since v21:
- rename *_mb to *_frags, e.g.:
s/xdp_buff_is_mb/xdp_buff_has_frags
- rely on ASSERT_* and not on CHECK in
bpf_xdp_load_bytes/bpf_xdp_store_bytes self-tests
- change new multi.frags SEC definitions to use the following schema:
prog_type.prog_flags/attach_place
- get rid of unnecessary properties in new multi.frags SEC definitions
- rebase on top of bpf-next
Changes since v20:
- rebase to current bpf-next
Changes since v19:
- do not run deprecated bpf_prog_load()
- rely on skb_frag_size_add/skb_frag_size_sub in
bpf_xdp_mb_increase_tail/bpf_xdp_mb_shrink_tail
- rely on sinfo->nr_frags in bpf_xdp_mb_shrink_tail to check if the frame has
been shrunk to a single-buffer one
- allow XDP_REDIRECT of a xdp-mb frame into a CPUMAP
Changes since v18:
- fix bpf_xdp_copy_buf utility routine when we want to load/store data
contained in frag<n>
- add a selftest for bpf_xdp_load_bytes/bpf_xdp_store_bytes when the caller
accesses data contained in frag<n> and frag<n+1>
Changes since v17:
- rework bpf_xdp_copy to squash base and frag management
- remove unused variable in bpf_xdp_mb_shrink_tail()
- move bpf_xdp_copy_buf() out of bpf_xdp_pointer()
- add sanity check for len in bpf_xdp_pointer()
- remove EXPORT_SYMBOL for __xdp_return()
- introduce frag_size field in xdp_rxq_info to let the driver specify max value
for xdp fragments. frag_size set to 0 means tail increase of the last
fragment is not supported.
Changes since v16:
- do not allow tailcalling a xdp multi-buffer/legacy program from a
legacy/multi-buff one.
- do not allow mixing xdp multi-buffer and xdp legacy programs in a
CPUMAP/DEVMAP
- add selftests for CPUMAP/DEVMAP xdp mb compatibility
- disable XDP_REDIRECT for xdp multi-buff for the moment
- set max offset value to 0xffff in bpf_xdp_pointer
- use ARG_PTR_TO_UNINIT_MEM and ARG_CONST_SIZE for arg3_type and arg4_type
of bpf_xdp_store_bytes/bpf_xdp_load_bytes
Changes since v15:
- let the verifier check buf is not NULL in
bpf_xdp_load_bytes/bpf_xdp_store_bytes helpers
- return an error if offset + length is over frame boundaries in
bpf_xdp_pointer routine
- introduce BPF_F_XDP_MB flag for bpf_attr to notify the kernel the eBPF
program fully supports xdp multi-buffer.
- reject a non XDP multi-buffer program if the driver is running in
XDP multi-buffer mode.
Changes since v14:
- introduce bpf_xdp_pointer utility routine and
bpf_xdp_load_bytes/bpf_xdp_store_bytes helpers
- drop bpf_xdp_adjust_data helper
- drop xdp_frags_truesize in skb_shared_info
- explode bpf_xdp_mb_adjust_tail in bpf_xdp_mb_increase_tail and
bpf_xdp_mb_shrink_tail
Changes since v13:
- use u32 for xdp_buff/xdp_frame flags field
- rename xdp_frags_tsize to xdp_frags_truesize
- fixed comments
Changes since v12:
- fix bpf_xdp_adjust_data helper for single-buffer use case
- return -EFAULT in bpf_xdp_adjust_{head,tail} in case the data pointers are not
properly reset
- collect ACKs from John
Changes since v11:
- add missing static to bpf_xdp_get_buff_len_proto structure
- fix bpf_xdp_adjust_data helper when offset is smaller than linear area length.
Changes since v10:
- move xdp->data to the requested payload offset instead of to the beginning of
the fragment in bpf_xdp_adjust_data()
Changes since v9:
- introduce bpf_xdp_adjust_data helper and related selftest
- add xdp_frags_size and xdp_frags_tsize fields in skb_shared_info
- introduce xdp_update_skb_shared_info utility routine in order to not reset
frags array in skb_shared_info converting from a xdp_buff/xdp_frame to a skb
- simplify bpf_xdp_copy routine
Changes since v8:
- add proper dma unmapping if XDP_TX fails on mvneta for a xdp multi-buff
- switch back to skb_shared_info implementation from previous xdp_shared_info
one
- avoid using a bitfield in xdp_buff/xdp_frame since it introduces performance
regressions. Tested now on 10G NIC (ixgbe) to verify there are no performance
penalties for regular codebase
- add bpf_xdp_get_buff_len helper and remove frame_length field in xdp ctx
- add data_len field in skb_shared_info struct
- introduce XDP_FLAGS_FRAGS_PF_MEMALLOC flag
Changes since v7:
- rebase on top of bpf-next
- fix sparse warnings
- improve comments for frame_length in include/net/xdp.h
Changes since v6:
- the main difference respect to previous versions is the new approach proposed
by Eelco to pass full length of the packet to eBPF layer in XDP context
- reintroduce multi-buff support to eBPF kselftests
- reintroduce multi-buff support to bpf_xdp_adjust_tail helper
- introduce multi-buffer support to bpf_xdp_copy helper
- rebase on top of bpf-next
Changes since v5:
- rebase on top of bpf-next
- initialize mb bit in xdp_init_buff() and drop per-driver initialization
- drop xdp->mb initialization in xdp_convert_zc_to_xdp_frame()
- postpone introduction of frame_length field in XDP ctx to another series
- minor changes
Changes since v4:
- rebase ontop of bpf-next
- introduce xdp_shared_info to build xdp multi-buff instead of using the
skb_shared_info struct
- introduce frame_length in xdp ctx
- drop previous bpf helpers
- fix bpf_xdp_adjust_tail for xdp multi-buff
- introduce xdp multi-buff self-tests for bpf_xdp_adjust_tail
- fix xdp_return_frame_bulk for xdp multi-buff
Changes since v3:
- rebase ontop of bpf-next
- add patch 10/13 to copy back paged data from a xdp multi-buff frame to
userspace buffer for xdp multi-buff selftests
Changes since v2:
- add throughput measurements
- drop bpf_xdp_adjust_mb_header bpf helper
- introduce selftest for xdp multibuffer
- addressed comments on bpf_xdp_get_frags_count
- introduce xdp multi-buff support to cpumaps
Changes since v1:
- Fix use-after-free in xdp_return_{buff/frame}
- Introduce bpf helpers
- Introduce xdp_mb sample program
- access skb_shared_info->nr_frags only on the last fragment
Changes since RFC:
- squash multi-buffer bit initialization in a single patch
- add mvneta non-linear XDP buff support for tx side
[0] https://netdevconf.info/0x14/session.html?talk-the-path-to-tcp-4k-mtu-and-rx-zerocopy
[1] https://github.com/xdp-project/xdp-project/blob/master/areas/core/xdp-multi-buffer01-design.org
[2] https://netdevconf.info/0x14/session.html?tutorial-add-XDP-support-to-a-NIC-driver (XDPmulti-buffers section)
Eelco Chaudron (3):
bpf: add frags support to the bpf_xdp_adjust_tail() API
bpf: add frags support to xdp copy helpers
bpf: selftests: update xdp_adjust_tail selftest to include xdp frags
Lorenzo Bianconi (19):
net: skbuff: add size metadata to skb_shared_info for xdp
xdp: introduce flags field in xdp_buff/xdp_frame
net: mvneta: update frags bit before passing the xdp buffer to eBPF
layer
net: mvneta: simplify mvneta_swbm_add_rx_fragment management
net: xdp: add xdp_update_skb_shared_info utility routine
net: marvell: rely on xdp_update_skb_shared_info utility routine
xdp: add frags support to xdp_return_{buff/frame}
net: mvneta: add frags support to XDP_TX
bpf: introduce BPF_F_XDP_HAS_FRAGS flag in prog_flags loading the ebpf
program
net: mvneta: enable jumbo frames if the loaded XDP program support
frags
bpf: introduce bpf_xdp_get_buff_len helper
bpf: move user_size out of bpf_test_init
bpf: introduce frags support to bpf_prog_test_run_xdp()
bpf: test_run: add xdp_shared_info pointer in bpf_test_finish
signature
libbpf: Add SEC name for xdp frags programs
net: xdp: introduce bpf_xdp_pointer utility routine
bpf: selftests: introduce bpf_xdp_{load,store}_bytes selftest
bpf: selftests: add CPUMAP/DEVMAP selftests for xdp frags
xdp: disable XDP_REDIRECT for xdp frags
====================
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Lorenzo Bianconi [Fri, 21 Jan 2022 10:10:06 +0000 (11:10 +0100)]
xdp: disable XDP_REDIRECT for xdp frags
XDP_REDIRECT is not fully supported yet for xdp frags since not
all XDP capable drivers can map a non-linear xdp_frame in ndo_xdp_xmit,
so disable it for the moment.
Acked-by: Toke Hoiland-Jorgensen <toke@redhat.com>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Acked-by: Jesper Dangaard Brouer <brouer@redhat.com>
Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>
Link: https://lore.kernel.org/r/0da25e117d0e2673f5d0ce6503393c55c6fb1be9.1642758637.git.lorenzo@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Lorenzo Bianconi [Fri, 21 Jan 2022 10:10:05 +0000 (11:10 +0100)]
bpf: selftests: add CPUMAP/DEVMAP selftests for xdp frags
Verify the compatibility checks when attaching an XDP frags program to a
CPUMAP/DEVMAP.
Acked-by: Toke Hoiland-Jorgensen <toke@redhat.com>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>
Link: https://lore.kernel.org/r/d94b4d35adc1e42c9ca5004e6b2cdfd75992304d.1642758637.git.lorenzo@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Lorenzo Bianconi [Fri, 21 Jan 2022 10:10:04 +0000 (11:10 +0100)]
bpf: selftests: introduce bpf_xdp_{load,store}_bytes selftest
Introduce a kernel selftest for the new bpf_xdp_{load,store}_bytes helpers
and the bpf_xdp_pointer/bpf_xdp_copy_buf utility routines.
Acked-by: Toke Hoiland-Jorgensen <toke@redhat.com>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>
Link: https://lore.kernel.org/r/2c99ae663a5dcfbd9240b1d0489ad55dea4f4601.1642758637.git.lorenzo@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Lorenzo Bianconi [Fri, 21 Jan 2022 10:10:03 +0000 (11:10 +0100)]
net: xdp: introduce bpf_xdp_pointer utility routine
Similar to skb_header_pointer, introduce the bpf_xdp_pointer utility routine
to return a pointer to a given position in the xdp_buff if the requested
area (offset + len) is contained in a contiguous memory area; otherwise the
data will be copied into a bounce buffer provided by the caller.
Similar to the tc counterpart, introduce the two following xdp helpers:
- bpf_xdp_load_bytes
- bpf_xdp_store_bytes
Reviewed-by: Eelco Chaudron <echaudro@redhat.com>
Acked-by: Toke Hoiland-Jorgensen <toke@redhat.com>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Acked-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>
Link: https://lore.kernel.org/r/ab285c1efdd5b7a9d361348b1e7d3ef49f6382b3.1642758637.git.lorenzo@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Toke Hoiland-Jorgensen [Fri, 21 Jan 2022 10:10:02 +0000 (11:10 +0100)]
bpf: generalise tail call map compatibility check
The check for tail call map compatibility ensures that tail calls only
happen between maps of the same type. To ensure backwards compatibility for
XDP frags we need a similar type of check for cpumap and devmap
programs, so move the state from bpf_array_aux into bpf_map, add
xdp_has_frags to the check, and apply the same check to cpumap and devmap.
Acked-by: John Fastabend <john.fastabend@gmail.com>
Co-developed-by: Lorenzo Bianconi <lorenzo@kernel.org>
Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>
Signed-off-by: Toke Hoiland-Jorgensen <toke@redhat.com>
Link: https://lore.kernel.org/r/f19fd97c0328a39927f3ad03e1ca6b43fd53cdfd.1642758637.git.lorenzo@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Lorenzo Bianconi [Fri, 21 Jan 2022 10:10:01 +0000 (11:10 +0100)]
libbpf: Add SEC name for xdp frags programs
Introduce support for the following SEC entries for XDP frags
property:
- SEC("xdp.frags")
- SEC("xdp.frags/devmap")
- SEC("xdp.frags/cpumap")
Acked-by: Toke Hoiland-Jorgensen <toke@redhat.com>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>
Link: https://lore.kernel.org/r/af23b6e4841c171ad1af01917839b77847a4bc27.1642758637.git.lorenzo@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Eelco Chaudron [Fri, 21 Jan 2022 10:10:00 +0000 (11:10 +0100)]
bpf: selftests: update xdp_adjust_tail selftest to include xdp frags
This change adds test cases for the xdp frags scenarios when shrinking
and growing.
Acked-by: Toke Hoiland-Jorgensen <toke@redhat.com>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Co-developed-by: Lorenzo Bianconi <lorenzo@kernel.org>
Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>
Signed-off-by: Eelco Chaudron <echaudro@redhat.com>
Link: https://lore.kernel.org/r/d2e6a0ebc52db6f89e62b9befe045032e5e0a5fe.1642758637.git.lorenzo@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Lorenzo Bianconi [Fri, 21 Jan 2022 10:09:59 +0000 (11:09 +0100)]
bpf: test_run: add xdp_shared_info pointer in bpf_test_finish signature
Introduce an xdp_shared_info pointer in the bpf_test_finish signature in
order to copy paged data back from an xdp frags frame to the userspace buffer.
Acked-by: Toke Hoiland-Jorgensen <toke@redhat.com>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>
Link: https://lore.kernel.org/r/c803673798c786f915bcdd6c9338edaa9740d3d6.1642758637.git.lorenzo@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Lorenzo Bianconi [Fri, 21 Jan 2022 10:09:58 +0000 (11:09 +0100)]
bpf: introduce frags support to bpf_prog_test_run_xdp()
Introduce the capability to allocate an xdp frags frame in the
bpf_prog_test_run_xdp routine. This is a preliminary patch to introduce
the selftests for the new xdp frags eBPF helpers.
Acked-by: Toke Hoiland-Jorgensen <toke@redhat.com>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>
Link: https://lore.kernel.org/r/b7c0e425a9287f00f601c4fc0de54738ec6ceeea.1642758637.git.lorenzo@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Lorenzo Bianconi [Fri, 21 Jan 2022 10:09:57 +0000 (11:09 +0100)]
bpf: move user_size out of bpf_test_init
Rely on data_size_in in the bpf_test_init routine signature. This is a
preliminary patch to introduce the xdp frags selftest.
Acked-by: Toke Hoiland-Jorgensen <toke@redhat.com>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>
Link: https://lore.kernel.org/r/6b48d38ed3d60240d7d6bb15e6fa7fabfac8dfb2.1642758637.git.lorenzo@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Eelco Chaudron [Fri, 21 Jan 2022 10:09:56 +0000 (11:09 +0100)]
bpf: add frags support to xdp copy helpers
This patch adds support for frags for the following helpers:
- bpf_xdp_output()
- bpf_perf_event_output()
Acked-by: Toke Hoiland-Jorgensen <toke@redhat.com>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Acked-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: Eelco Chaudron <echaudro@redhat.com>
Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>
Link: https://lore.kernel.org/r/340b4a99cdc24337b40eaf8bb597f9f9e7b0373e.1642758637.git.lorenzo@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Eelco Chaudron [Fri, 21 Jan 2022 10:09:55 +0000 (11:09 +0100)]
bpf: add frags support to the bpf_xdp_adjust_tail() API
This change adds support for tail growing and shrinking for XDP frags.
When called on a non-linear packet with a grow request, it will work
on the last fragment of the packet. So the maximum grow size is the
last fragment's tailroom, i.e. no new buffer will be allocated.
An XDP frags capable driver is expected to set frag_size in the xdp_rxq_info
data structure to notify the XDP core of the fragment size.
frag_size set to 0 is interpreted by the XDP core as "tail growing is
not allowed".
Introduce the __xdp_rxq_info_reg utility routine to initialize the frag_size
field.
When shrinking, it will work from the last fragment all the way down to
the base buffer, depending on the shrinking size. It's important to mention
that once you shrink, the fragment(s) are freed, so you cannot grow
back to the original size.
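A hedged usage sketch (not from the patch) of the helper on a frags frame, following the semantics described above:
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>
SEC("xdp.frags")
int xdp_trim_trailer(struct xdp_md *ctx)
{
        /* Shrink: drop the last 4 bytes; with frags this trims from the
         * last fragment downwards. A positive delta could only grow into
         * the last fragment's tailroom (bounded by the driver's frag_size). */
        if (bpf_xdp_adjust_tail(ctx, -4))
                return XDP_DROP;

        return XDP_PASS;
}
char LICENSE[] SEC("license") = "GPL";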
Acked-by: Toke Hoiland-Jorgensen <toke@redhat.com>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Acked-by: Jakub Kicinski <kuba@kernel.org>
Co-developed-by: Lorenzo Bianconi <lorenzo@kernel.org>
Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>
Signed-off-by: Eelco Chaudron <echaudro@redhat.com>
Link: https://lore.kernel.org/r/eabda3485dda4f2f158b477729337327e609461d.1642758637.git.lorenzo@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Lorenzo Bianconi [Fri, 21 Jan 2022 10:09:54 +0000 (11:09 +0100)]
bpf: introduce bpf_xdp_get_buff_len helper
Introduce the bpf_xdp_get_buff_len helper in order to return the total xdp
buffer size (linear and paged areas).
Acked-by: Toke Hoiland-Jorgensen <toke@redhat.com>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>
Link: https://lore.kernel.org/r/aac9ac3504c84026cf66a3c71b7c5ae89bc991be.1642758637.git.lorenzo@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Lorenzo Bianconi [Fri, 21 Jan 2022 10:09:53 +0000 (11:09 +0100)]
net: mvneta: enable jumbo frames if the loaded XDP program support frags
Enable the capability to receive jumbo frames even if the interface is
running in XDP mode, provided the loaded program declares proper support
for xdp frags. At the same time, reject an xdp program that does not
support xdp frags if the driver is running in xdp frags mode.
Acked-by: Toke Hoiland-Jorgensen <toke@redhat.com>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>
Link: https://lore.kernel.org/r/6909f81a3cbb8fb6b88e914752c26395771b882a.1642758637.git.lorenzo@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Lorenzo Bianconi [Fri, 21 Jan 2022 10:09:52 +0000 (11:09 +0100)]
bpf: introduce BPF_F_XDP_HAS_FRAGS flag in prog_flags loading the ebpf program
Introduce BPF_F_XDP_HAS_FRAGS and the related field in bpf_prog_aux
in order to notify the driver that the loaded program supports xdp frags.
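A hedged userspace sketch of supplying the flag when loading raw instructions with libbpf's bpf_prog_load(); programs written in C would normally get it via SEC("xdp.frags") instead, and the surrounding setup here is illustrative:
#include <linux/bpf.h>
#include <bpf/bpf.h>
static int load_xdp_frags_prog(const struct bpf_insn *insns, size_t insn_cnt)
{
        LIBBPF_OPTS(bpf_prog_load_opts, opts,
                .prog_flags = BPF_F_XDP_HAS_FRAGS,   /* declare frags support */
        );

        return bpf_prog_load(BPF_PROG_TYPE_XDP, "xdp_frags_prog", "GPL",
                             insns, insn_cnt, &opts);
}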
Acked-by: Toke Hoiland-Jorgensen <toke@redhat.com>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>
Link: https://lore.kernel.org/r/db2e8075b7032a356003f407d1b0deb99adaa0ed.1642758637.git.lorenzo@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Lorenzo Bianconi [Fri, 21 Jan 2022 10:09:51 +0000 (11:09 +0100)]
net: mvneta: add frags support to XDP_TX
Introduce the capability to map a non-linear xdp buffer when running
mvneta_xdp_submit_frame() for XDP_TX and XDP_REDIRECT.
Acked-by: Toke Hoiland-Jorgensen <toke@redhat.com>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>
Link: https://lore.kernel.org/r/5d46ab63870ffe96fb95e6075a7ff0c81ef6424d.1642758637.git.lorenzo@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Lorenzo Bianconi [Fri, 21 Jan 2022 10:09:50 +0000 (11:09 +0100)]
xdp: add frags support to xdp_return_{buff/frame}
Take into account whether the received xdp_buff/xdp_frame is non-linear
when recycling/returning the frame memory to the allocator or into
xdp_frame_bulk.
Acked-by: Toke Hoiland-Jorgensen <toke@redhat.com>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>
Link: https://lore.kernel.org/r/a961069febc868508ce1bdf5e53a343eb4e57cb2.1642758637.git.lorenzo@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Lorenzo Bianconi [Fri, 21 Jan 2022 10:09:49 +0000 (11:09 +0100)]
net: marvell: rely on xdp_update_skb_shared_info utility routine
Rely on the xdp_update_skb_shared_info routine in order to avoid
resetting the frags array in the skb_shared_info structure when building
the skb in mvneta_swbm_build_skb(). The frags array is expected to
be initialized by the receiving driver while building the xdp_buff;
here we just need to update the memory metadata.
Acked-by: Toke Hoiland-Jorgensen <toke@redhat.com>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>
Link: https://lore.kernel.org/r/e0dad97f5d02b13f189f99f1e5bc8e61bef73412.1642758637.git.lorenzo@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Lorenzo Bianconi [Fri, 21 Jan 2022 10:09:48 +0000 (11:09 +0100)]
net: xdp: add xdp_update_skb_shared_info utility routine
Introduce xdp_update_skb_shared_info routine to update frags array
metadata in skb_shared_info data structure converting to a skb from
a xdp_buff or xdp_frame.
According to the current skb_shared_info architecture in
xdp_frame/xdp_buff and to the xdp frags support, there is
no need to run skb_add_rx_frag() and reset the frags array when converting
the buffer to an skb, since the frags array is in the same position for
xdp_buff/xdp_frame and for the skb; we just need to update the memory
metadata.
Introduce the XDP_FLAGS_PF_MEMALLOC flag in xdp_buff_flags in order to mark
the xdp_buff or xdp_frame as under memory pressure if pages of the frags
array are under memory pressure. Doing so we can avoid looping over all
fragments in the xdp_update_skb_shared_info routine. The driver is expected
to set the flag while constructing the xdp_buff, using the
xdp_buff_set_frag_pfmemalloc utility routine.
Rely on xdp_update_skb_shared_info in the __xdp_build_skb_from_frame routine
when converting a non-linear xdp_frame to an skb after an XDP_REDIRECT.
Acked-by: Toke Hoiland-Jorgensen <toke@redhat.com>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Acked-by: Jesper Dangaard Brouer <brouer@redhat.com>
Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>
Link: https://lore.kernel.org/r/bfd23fb8a8d7438724f7819c567cdf99ffd6226f.1642758637.git.lorenzo@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Lorenzo Bianconi [Fri, 21 Jan 2022 10:09:47 +0000 (11:09 +0100)]
net: mvneta: simplify mvneta_swbm_add_rx_fragment management
Relying on the xdp frags bit, remove the skb_shared_info structure
allocated on the stack in the mvneta_rx_swbm routine and simplify
mvneta_swbm_add_rx_fragment by accessing skb_shared_info in the
xdp_buff structure directly. There is no performance penalty in
this approach since mvneta_swbm_add_rx_fragment runs only for the
xdp frags use case.
Acked-by: Toke Hoiland-Jorgensen <toke@redhat.com>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>
Link: https://lore.kernel.org/r/45f050c094ccffce49d6bc5112939ed35250ba90.1642758637.git.lorenzo@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Lorenzo Bianconi [Fri, 21 Jan 2022 10:09:46 +0000 (11:09 +0100)]
net: mvneta: update frags bit before passing the xdp buffer to eBPF layer
Update the frags bit (XDP_FLAGS_HAS_FRAGS) in the xdp_buff to notify the
XDP/eBPF layer and XDP remote drivers whether this is a "non-linear"
XDP buffer. Access skb_shared_info only if the XDP_FLAGS_HAS_FRAGS flag
is set, in order to avoid possible cache misses.
Acked-by: Toke Hoiland-Jorgensen <toke@redhat.com>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>
Link: https://lore.kernel.org/r/c00a73097f8a35860d50dae4a36e6cc9ef7e172f.1642758637.git.lorenzo@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Lorenzo Bianconi [Fri, 21 Jan 2022 10:09:45 +0000 (11:09 +0100)]
xdp: introduce flags field in xdp_buff/xdp_frame
Introduce a flags field in the xdp_frame and xdp_buff data structures
to define additional buffer features. At the moment the only
supported buffer feature is the frags bit (XDP_FLAGS_HAS_FRAGS).
The frags bit is used to specify whether this is a linear buffer
(XDP_FLAGS_HAS_FRAGS not set) or a frags frame (XDP_FLAGS_HAS_FRAGS
set). In the latter case the driver is expected to initialize the
skb_shared_info structure at the end of the first buffer to link together
the subsequent buffers belonging to the same frame.
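A hedged sketch of checking the new bit (the kernel adds small inline helpers along these lines; treat the exact name here as illustrative):
static inline bool xdp_buff_has_frags_sketch(const struct xdp_buff *xdp)
{
        return !!(xdp->flags & XDP_FLAGS_HAS_FRAGS);
}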
Acked-by: Toke Hoiland-Jorgensen <toke@redhat.com>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Acked-by: Jesper Dangaard Brouer <brouer@redhat.com>
Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>
Link: https://lore.kernel.org/r/e389f14f3a162c0a5bc6a2e1aa8dd01a90be117d.1642758637.git.lorenzo@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Lorenzo Bianconi [Fri, 21 Jan 2022 10:09:44 +0000 (11:09 +0100)]
net: skbuff: add size metadata to skb_shared_info for xdp
Introduce the xdp_frags_size field in the skb_shared_info data structure
to store the xdp_buff/xdp_frame paged size (xdp_frags_size will
be used by the xdp frags support). In order to not increase the
skb_shared_info size, we use a hole left by skb_shared_info
alignment.
Acked-by: Toke Hoiland-Jorgensen <toke@redhat.com>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Acked-by: Jesper Dangaard Brouer <brouer@redhat.com>
Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>
Link: https://lore.kernel.org/r/8a849819a3e0a143d540f78a3a5add76e17e980d.1642758637.git.lorenzo@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
David S. Miller [Fri, 21 Jan 2022 14:32:21 +0000 (14:32 +0000)]
Merge branch 'octeontx2-af-fixes'
Subbaraya Sundeep says:
====================
octeontx2-af: Fixes for CN10K and CN9xxx platforms
This patchset has consolidated fixes in the Octeontx2 driver for
handling CN10K and CN9xxx platforms. When testing the new CN10K
hardware, some issues resurfaced, such as accessing the wrong register
on CN10K and enabling loopback on unsupported interfaces. Some fixes
are needed for CN9xxx platforms as well.
Below is a description of the patches:
Patch 1: AF sets the RX RSS action for all VFs when a VF is
brought up. But when a PF sets an RX action for its VF, like Drop/Direct
to a queue in an ntuple filter, it is not retained because of the AF fixup.
This patch skips modifying the VF RX RSS action if the PF has already
set its action.
Patch 2: When configuring backpressure, the wrong register was being read for
LBKs; fix it.
Patch 3: Some RVU blocks may take a longer time to reset but are guaranteed
to complete the reset eventually. Hence wait until the reset is complete.
Patch 4: For enabling the LMAC, CN10K needs a different register than
CN9xxx platforms. Hence change it.
Patch 5: Add the missing barrier before submitting a memory pointer
to the aura hardware.
Patch 6: Increase the polling time for link credit restore and
return a proper error code when a timeout occurs.
Patch 7: Internal loopback is not supported on LPCS interfaces like
SGMII/QSGMII, so do not enable it.
Patch 8: When there is an error in message processing, AF sets the error
response and replies back to the requestor. The PF forwards an invalid
message back to the VF if the AF reply has an error in it, so the VF never
sees the actual error set by AF for its message. Change this such that the
PF simply forwards the actual reply and lets the VF handle the error.
Patch 9: An ntuple filter with "flow-type ether proto 0x8842 vlan 0x92e"
was not working since ethertype 0x8842 is the NGIO protocol. The hardware
parser explicitly parses such NGIO packets and marks them as NGIO but does
not mark them as tagged packets. Fix this by changing the parser so that it
marks the packet as both NGIO and tagged, using separate layer types.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>