Björn Töpel [Mon, 8 Mar 2021 11:29:07 +0000 (12:29 +0100)]
bpf, xdp: Restructure redirect actions
The XDP_REDIRECT implementations for maps and non-maps are fairly
similar, but obviously need to take different code paths depending on
whether the target uses a map or not. Today, the redirect targets for
XDP either use a map or are based on ifindex.
Here, the map type and id are added to bpf_redirect_info, instead of
the actual map. Map type, map item/ifindex, and the map id (if any) are
passed to xdp_do_redirect().
For ifindex-based redirect, used by the bpf_redirect() XDP BPF helper,
a special map type/id pair is used. A map type of UNSPEC together with
a map id equal to INT_MAX has the special meaning of an ifindex-based
redirect. Note that valid map ids lie in the half-open range
[1, INT_MAX).
In addition to making the code easier to follow, using an explicit type
and id in bpf_redirect_info has a slight positive performance impact
by avoiding a pointer indirection for the map type lookup, instead
using the cacheline already loaded for bpf_redirect_info.
Since the actual map is not passed via bpf_redirect_info anymore, the
map lookup is only done in the BPF helper. This means that the
bpf_clear_redirect_map() function can be removed. The actual map item
is RCU protected.
The bpf_redirect_info flags member is not used by XDP, and not
read/written any more. The map member is only written to when
required/used, and not unconditionally.
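As a rough sketch (field names approximate; see the patch itself for
the authoritative layout), the redirect target in bpf_redirect_info is
now described by a type and an id instead of a map pointer:

/* Sketch only: approximation of the reworked per-CPU structure. */
struct bpf_redirect_info {
	u32 flags;			/* not used by XDP */
	u32 tgt_index;			/* ifindex, or index into the map */
	void *tgt_value;		/* looked-up map item, RCU protected */
	u32 map_id;			/* INT_MAX means ifindex-based redirect */
	enum bpf_map_type map_type;	/* BPF_MAP_TYPE_UNSPEC for ifindex */
	struct bpf_nh_params nh;
};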
Signed-off-by: Björn Töpel <bjorn.topel@intel.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Reviewed-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
Acked-by: Jesper Dangaard Brouer <brouer@redhat.com>
Acked-by: Toke Høiland-Jørgensen <toke@redhat.com>
Link: https://lore.kernel.org/bpf/20210308112907.559576-3-bjorn.topel@gmail.com
Björn Töpel [Mon, 8 Mar 2021 11:29:06 +0000 (12:29 +0100)]
bpf, xdp: Make bpf_redirect_map() a map operation
Currently the bpf_redirect_map() implementation dispatches to the
correct map-lookup function via a switch-statement. To avoid the
dispatching, this change adds bpf_redirect_map() as a map
operation. Each map provides its own bpf_redirect_map() version, and
the correct function is automatically selected by the BPF verifier.
A nice side-effect of the code movement is that the map lookup
functions are now local to the map implementation files, which removes
one additional function call.
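A hedged sketch of the mechanism (the op name and signature follow the
upstream pattern, but treat the details as approximate):

/* Sketch: each map type supplies its own redirect implementation; the
 * verifier selects the map-specific function at program load time. */
struct bpf_map_ops {
	/* ... existing ops ... */
	int (*map_redirect)(struct bpf_map *map, u32 ifindex, u64 flags);
};

/* e.g. devmap forwards to the shared XDP redirect logic */
static int dev_map_redirect(struct bpf_map *map, u32 ifindex, u64 flags)
{
	return __bpf_xdp_redirect_map(map, ifindex, flags);
}

const struct bpf_map_ops dev_map_ops = {
	/* ... */
	.map_redirect = dev_map_redirect,
};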
Signed-off-by: Björn Töpel <bjorn.topel@intel.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Jesper Dangaard Brouer <brouer@redhat.com>
Acked-by: Toke Høiland-Jørgensen <toke@redhat.com>
Link: https://lore.kernel.org/bpf/20210308112907.559576-2-bjorn.topel@gmail.com
Andrii Nakryiko [Tue, 9 Mar 2021 04:43:22 +0000 (20:43 -0800)]
selftests/bpf: Fix compiler warning in BPF_KPROBE definition in loop6.c
Add the missing return type to the BPF_KPROBE definition. Without it,
the compiler generates the following warning:
progs/loop6.c:68:12: warning: type specifier missing, defaults to 'int' [-Wimplicit-int]
BPF_KPROBE(trace_virtqueue_add_sgs, void *unused, struct scatterlist **sgs,
^
1 warning generated.
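The fix is simply the missing 'int' in front of the macro; roughly
(remaining arguments elided, BPF_KPROBE from libbpf's bpf_tracing.h
assumed):

SEC("kprobe/virtqueue_add_sgs")
int BPF_KPROBE(trace_virtqueue_add_sgs, void *unused, struct scatterlist **sgs)
{
	/* ... body unchanged; the point is the explicit return type ... */
	return 0;
}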
Fixes: 86a35af628e5 ("selftests/bpf: Add a verifier scale test with unknown bounded loop")
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Yonghong Song <yhs@fb.com>
Link: https://lore.kernel.org/bpf/20210309044322.3487636-1-andrii@kernel.org
Alexei Starovoitov [Tue, 9 Mar 2021 18:59:46 +0000 (10:59 -0800)]
Merge branch 'Add clang-based BTF_KIND_FLOAT tests'
Ilya Leoshkevich says:
====================
The two tests here did not make it into the main BTF_KIND_FLOAT series
because of a dependency on LLVM.
v0: https://lore.kernel.org/bpf/20210226202256.116518-10-iii@linux.ibm.com/
v0 -> v1: Per Alexei's suggestion, document the required LLVM commit.
v1: https://lore.kernel.org/bpf/20210305170844.151594-1-iii@linux.ibm.com/
v1 -> v2: Per Andrii's suggestions, use double in
core_reloc_size___diff_sz and non-pointer member in
btf_dump_test_case_syntax.
====================
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Ilya Leoshkevich [Tue, 9 Mar 2021 00:56:49 +0000 (01:56 +0100)]
selftests/bpf: Add BTF_KIND_FLOAT to btf_dump_test_case_syntax
Check that dumping various floating-point types produces valid C
code.
Suggested-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20210309005649.162480-3-iii@linux.ibm.com
Ilya Leoshkevich [Tue, 9 Mar 2021 00:56:48 +0000 (01:56 +0100)]
selftests/bpf: Add BTF_KIND_FLOAT to test_core_reloc_size
Verify that bpf_core_field_size() is working correctly with floats.
Also document the required clang version.
Suggested-by: John Fastabend <john.fastabend@gmail.com>
Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20210309005649.162480-2-iii@linux.ibm.com
Jean-Philippe Brucker [Mon, 8 Mar 2021 18:28:31 +0000 (19:28 +0100)]
selftests/bpf: Fix typo in Makefile
The selftest build fails when trying to install the scripts:
rsync: [sender] link_stat "tools/testing/selftests/bpf/test_docs_build.sh" failed: No such file or directory (2)
Fix the filename.
Fixes: a01d935b2e09 ("tools/bpf: Remove bpf-helpers from bpftool docs")
Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20210308182830.155784-1-jean-philippe@linaro.org
Jean-Philippe Brucker [Mon, 8 Mar 2021 18:25:22 +0000 (19:25 +0100)]
libbpf: Fix arm64 build
The macro for libbpf_smp_store_release() doesn't build on arm64, fix it.
Fixes: 291471dd1559 ("libbpf, xsk: Add libbpf_smp_store_release libbpf_smp_load_acquire")
Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20210308182521.155536-1-jean-philippe@linaro.org
Andrii Nakryiko [Mon, 8 Mar 2021 16:51:01 +0000 (08:51 -0800)]
Merge branch 'load-acquire/store-release barriers for AF_XDP rings'
Björn Töpel says:
====================
This two-patch series introduces load-acquire/store-release barriers
for the AF_XDP rings.
For most contemporary architectures, this is more effective than a
SPSC ring based on smp_{r,w,}mb() barriers. More importantly,
load-acquire/store-release semantics make the ring code easier to
follow.
This is effectively the change done in commit 6c43c091bdc5
("documentation: Update circular buffer for load-acquire/store-release"),
but for the AF_XDP rings.
Both libbpf and the kernel-side are updated.
Full details are outlined in the commits!
Thanks to the LKMM-folks (Paul/Alan/Will) for helping me out in this
complicated matter!
Changelog
v1[1]->v2:
* Expanded the commit message for patch 1, and included the LKMM
litmus tests. Hopefully this clears things up. (Daniel)
* Clarified why the smp_mb()/smp_load_acquire() is not needed in (A);
control dependency with load to store. (Toke)
[1] https://lore.kernel.org/bpf/20210301104318.263262-1-bjorn.topel@gmail.com/
Thanks,
Björn
====================
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Björn Töpel [Fri, 5 Mar 2021 09:41:13 +0000 (10:41 +0100)]
libbpf, xsk: Add libbpf_smp_store_release libbpf_smp_load_acquire
Now that the AF_XDP rings have load-acquire/store-release semantics,
move libbpf to that as well.
The library-internal libbpf_smp_{load_acquire,store_release} are only
valid for 32-bit words on ARM64.
Also, remove the barriers that are no longer in use.
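For intuition, a hedged sketch of what such macros can look like on a
strongly-ordered architecture like x86-64, where acquire/release
ordering only requires a compiler barrier (the actual libbpf
definitions may differ):

/* Sketch, x86-64 flavor: the hardware (TSO) already orders plain loads
 * and stores, so only compiler reordering must be prevented. */
#define libbpf_smp_store_release(p, v)				\
	do {							\
		asm volatile("" : : : "memory");		\
		*(volatile typeof(*(p)) *)(p) = (v);		\
	} while (0)

#define libbpf_smp_load_acquire(p)				\
	({							\
		typeof(*(p)) ___v = *(volatile typeof(*(p)) *)(p); \
		asm volatile("" : : : "memory");		\
		___v;						\
	})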
Signed-off-by: Björn Töpel <bjorn.topel@intel.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: Toke Høiland-Jørgensen <toke@redhat.com>
Link: https://lore.kernel.org/bpf/20210305094113.413544-3-bjorn.topel@gmail.com
Björn Töpel [Fri, 5 Mar 2021 09:41:12 +0000 (10:41 +0100)]
xsk: Update rings for load-acquire/store-release barriers
Currently, the AF_XDP rings use general smp_{r,w,}mb() barriers on
the kernel side. On most modern architectures,
load-acquire/store-release barriers perform better and result in
simpler code for circular ring buffers.
This change updates the XDP socket rings to use
load-acquire/store-release barriers.
It is important to note that changing from the old smp_{r,w,}mb()
barriers to load-acquire/store-release barriers does not break
compatibility. The old semantics work with the new ones, and vice
versa.
As pointed out by "Documentation/memory-barriers.txt" in the "SMP
BARRIER PAIRING" section:
"General barriers pair with each other, though they also pair with
most other types of barriers, albeit without multicopy atomicity.
An acquire barrier pairs with a release barrier, but both may also
pair with other barriers, including of course general barriers."
How different barriers behave and pair is outlined in
"tools/memory-model/Documentation/cheatsheet.txt".
In order to make sure that compatibility is not broken, LKMM herd7
based litmus tests can be constructed and verified.
We generalize the XDP socket ring to a one-entry ring, and create two
scenarios: one where the ring is full, where only the consumer can
proceed, followed by the producer; and one where the ring is empty,
where only the producer can proceed, followed by the consumer. Each
scenario is then expanded to four different tests: general
producer/general consumer, general producer/acqrel consumer, acqrel
producer/general consumer, and acqrel producer/acqrel consumer. In
total, eight tests.
The empty ring test:
C spsc-rb+empty
// Simple one entry ring:
// prod cons allowed action prod cons
// 0 0 => prod => 1 0
// 0 1 => cons => 0 0
// 1 0 => cons => 1 1
// 1 1 => prod => 0 1
{}
// We start at prod==0, cons==0, data==0, i.e. nothing has been
// written to the ring. From here only the producer can start, and
// should write 1. Afterwards, consumer can continue and read 1 to
// data. Can we enter state prod==1, cons==1, but consumer observed
// the incorrect value of 0?
P0(int *prod, int *cons, int *data)
{
... producer
}
P1(int *prod, int *cons, int *data)
{
... consumer
}
exists( 1:d=0 /\ prod=1 /\ cons=1 );
The full ring test:
C spsc-rb+full
// Simple one entry ring:
// prod cons allowed action prod cons
// 0 0 => prod => 1 0
// 0 1 => cons => 0 0
// 1 0 => cons => 1 1
// 1 1 => prod => 0 1
{ prod = 1; }
// We start at prod==1, cons==0, data==0, i.e. producer has
// written 0, so from here only the consumer can start, and should
// consume 0. Afterwards, producer can continue and write 1 to
// data. Can we enter state prod==0, cons==1, but consumer observed
// the write of 1?
P0(int *prod, int *cons, int *data)
{
... producer
}
P1(int *prod, int *cons, int *data)
{
... consumer
}
exists( 1:d=1 /\ prod=0 /\ cons=1 );
where P0 and P1 are:
P0(int *prod, int *cons, int *data)
{
	int p;

	p = READ_ONCE(*prod);
	if (READ_ONCE(*cons) == p) {
		WRITE_ONCE(*data, 1);
		smp_wmb();
		WRITE_ONCE(*prod, p ^ 1);
	}
}

P0(int *prod, int *cons, int *data)
{
	int p;

	p = READ_ONCE(*prod);
	if (READ_ONCE(*cons) == p) {
		WRITE_ONCE(*data, 1);
		smp_store_release(prod, p ^ 1);
	}
}

P1(int *prod, int *cons, int *data)
{
	int c;
	int d = -1;

	c = READ_ONCE(*cons);
	if (READ_ONCE(*prod) != c) {
		smp_rmb();
		d = READ_ONCE(*data);
		smp_mb();
		WRITE_ONCE(*cons, c ^ 1);
	}
}

P1(int *prod, int *cons, int *data)
{
	int c;
	int d = -1;

	c = READ_ONCE(*cons);
	if (smp_load_acquire(prod) != c) {
		d = READ_ONCE(*data);
		smp_store_release(cons, c ^ 1);
	}
}
The full LKMM litmus tests are found at [1].
On x86-64 systems, the l2fwd AF_XDP xdpsock sample performance
increases by 1%. This is mostly because the smp_mb() is removed, which
is a relatively expensive operation on these platforms.
Weakly-ordered platforms, such as ARM64, might benefit even more.
[1] https://github.com/bjoto/litmus-xsk
Signed-off-by: Björn Töpel <bjorn.topel@intel.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: Toke Høiland-Jørgensen <toke@redhat.com>
Link: https://lore.kernel.org/bpf/20210305094113.413544-2-bjorn.topel@gmail.com
Jiri Olsa [Fri, 5 Mar 2021 13:40:50 +0000 (14:40 +0100)]
selftests/bpf: Fix test_attach_probe for powerpc uprobes
When testing uprobes, the test gets the GEP (Global Entry Point)
address from kallsyms, but the function is then called locally, so the
uprobe is not triggered.
Fix this by adjusting the address to the LEP (Local Entry Point) on the
powerpc arch, plus an instruction check taken from the
ppc_function_entry function, as pointed out and explained by Michael
and Naveen.
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
Link: https://lore.kernel.org/bpf/20210305134050.139840-1-jolsa@kernel.org
Xuesen Huang [Fri, 5 Mar 2021 12:33:47 +0000 (20:33 +0800)]
selftests, bpf: Extend test_tc_tunnel test with vxlan
Add the BPF_F_ADJ_ROOM_ENCAP_L2_ETH flag to the existing tests that
encapsulate an Ethernet header as the inner L2 header.
Update a vxlan encapsulation test case.
Signed-off-by: Xuesen Huang <huangxuesen@kuaishou.com>
Signed-off-by: Li Wang <wangli09@kuaishou.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Willem de Bruijn <willemb@google.com>
Link: https://lore.kernel.org/bpf/20210305123347.15311-1-hxseverything@gmail.com
Xuesen Huang [Thu, 4 Mar 2021 06:40:46 +0000 (14:40 +0800)]
bpf: Add bpf_skb_adjust_room flag BPF_F_ADJ_ROOM_ENCAP_L2_ETH
bpf_skb_adjust_room sets the inner_protocol to skb->protocol for packet
encapsulation. But that is not appropriate when pushing an Ethernet
header. Add an option to further specify the encap L2 type, and set the
inner_protocol to ETH_P_TEB.
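A hedged sketch of how the flag might be combined with the existing
encap flags in a TC program (flag set and variables illustrative,
modeled on an Ethernet-in-UDP encap):

/* Illustrative only: grow room at the MAC layer and mark the pushed
 * inner L2 header as Ethernet, so inner_protocol becomes ETH_P_TEB. */
__u64 flags = BPF_F_ADJ_ROOM_FIXED_GSO |
	      BPF_F_ADJ_ROOM_ENCAP_L3_IPV4 |
	      BPF_F_ADJ_ROOM_ENCAP_L4_UDP |
	      BPF_F_ADJ_ROOM_ENCAP_L2_ETH |
	      BPF_F_ADJ_ROOM_ENCAP_L2(ETH_HLEN);

ret = bpf_skb_adjust_room(skb, outer_hdr_len, BPF_ADJ_ROOM_MAC, flags);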
Suggested-by: Willem de Bruijn <willemb@google.com>
Signed-off-by: Xuesen Huang <huangxuesen@kuaishou.com>
Signed-off-by: Zhiyong Cheng <chengzhiyong@kuaishou.com>
Signed-off-by: Li Wang <wangli09@kuaishou.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Willem de Bruijn <willemb@google.com>
Link: https://lore.kernel.org/bpf/20210304064046.6232-1-hxseverything@gmail.com
Jiapeng Chong [Wed, 3 Mar 2021 07:52:10 +0000 (15:52 +0800)]
selftests/bpf: Simplify the calculation of variables
Fix the following coccicheck warnings:
./tools/testing/selftests/bpf/test_sockmap.c:735:35-37: WARNING !A || A && B is equivalent to !A || B.
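In general terms the simplification looks like this (hypothetical
variable names; the real change is at test_sockmap.c:735):

/* Before: the second A is redundant; if the right-hand side is
 * evaluated at all, !A was false, so A is already known true. */
if (!opt_a || (opt_a && opt_b))
	do_thing();

/* After: equivalent, simpler condition */
if (!opt_a || opt_b)
	do_thing();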
Reported-by: Abaci Robot <abaci@linux.alibaba.com>
Signed-off-by: Jiapeng Chong <jiapeng.chong@linux.alibaba.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Yonghong Song <yhs@fb.com>
Link: https://lore.kernel.org/bpf/1614757930-17197-1-git-send-email-jiapeng.chong@linux.alibaba.com
Jiapeng Chong [Wed, 3 Mar 2021 07:20:35 +0000 (15:20 +0800)]
bpf: Simplify the calculation of variables
Fix the following coccicheck warnings:
./tools/bpf/bpf_dbg.c:1201:55-57: WARNING !A || A && B is equivalent to !A || B.
Reported-by: Abaci Robot <abaci@linux.alibaba.com>
Signed-off-by: Jiapeng Chong <jiapeng.chong@linux.alibaba.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Yonghong Song <yhs@fb.com>
Link: https://lore.kernel.org/bpf/1614756035-111280-1-git-send-email-jiapeng.chong@linux.alibaba.com
Alexei Starovoitov [Fri, 5 Mar 2021 03:11:30 +0000 (19:11 -0800)]
Merge branch 'PROG_TEST_RUN support for sk_lookup programs'
Lorenz Bauer says:
====================
We don't have PROG_TEST_RUN support for sk_lookup programs at the
moment. So far this hasn't been a problem, since we can run our
tests in a separate network namespace. For benchmarking it's nice
to have PROG_TEST_RUN, so I've gone and implemented it.
Based on discussion on the v1 I've dropped support for testing multiple
programs at once.
Changes since v3:
- Use bpf_test_timer prefix (Andrii)
Changes since v2:
- Fix test_verifier failure (Alexei)
Changes since v1:
- Add sparse annotations to the t_* functions
- Add appropriate type casts in bpf_prog_test_run_sk_lookup
- Drop running multiple programs
====================
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Lorenz Bauer [Wed, 3 Mar 2021 10:18:16 +0000 (10:18 +0000)]
selftests: bpf: Don't run sk_lookup in verifier tests
sk_lookup doesn't allow setting data_in for bpf_prog_run. This doesn't
play well with the verifier tests, since they always set a 64 byte
input buffer. Allow skipping the test run by setting bpf_test.runs to
a negative value, and don't run the ctx access case for sk_lookup. We
have dedicated ctx access tests, so skipping here
doesn't reduce coverage.
Signed-off-by: Lorenz Bauer <lmb@cloudflare.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20210303101816.36774-6-lmb@cloudflare.com
Lorenz Bauer [Wed, 3 Mar 2021 10:18:15 +0000 (10:18 +0000)]
selftests: bpf: Check that PROG_TEST_RUN repeats as requested
Extend a simple prog_run test to check that PROG_TEST_RUN adheres
to the requested repetitions. Convert it to use BPF skeleton.
Signed-off-by: Lorenz Bauer <lmb@cloudflare.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20210303101816.36774-5-lmb@cloudflare.com
Lorenz Bauer [Wed, 3 Mar 2021 10:18:14 +0000 (10:18 +0000)]
selftests: bpf: Convert sk_lookup ctx access tests to PROG_TEST_RUN
Convert the selftests for sk_lookup narrow context access to use
PROG_TEST_RUN instead of creating actual sockets. This ensures that
ctx is populated correctly when using PROG_TEST_RUN.
Assert concrete values since we now control remote_ip and remote_port.
Signed-off-by: Lorenz Bauer <lmb@cloudflare.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20210303101816.36774-4-lmb@cloudflare.com
Lorenz Bauer [Wed, 3 Mar 2021 10:18:13 +0000 (10:18 +0000)]
bpf: Add PROG_TEST_RUN support for sk_lookup programs
Allow passing sk_lookup programs to PROG_TEST_RUN. User space
provides the full bpf_sk_lookup struct as context. Since the
context includes a socket pointer that can't be exposed
to user space, we define that PROG_TEST_RUN returns the cookie
of the selected socket, or zero, in place of the socket pointer.
We don't support testing programs that select a reuseport socket,
since this would mean running another (unrelated) BPF program
from the sk_lookup test handler.
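A hedged user-space sketch of driving this with libbpf's test-run
options (field values illustrative; the cookie union on struct
bpf_sk_lookup is what this patch introduces):

#include <stdio.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <linux/bpf.h>
#include <bpf/bpf.h>

/* Sketch: run an sk_lookup prog once over a synthetic lookup context. */
static int run_sk_lookup_once(int prog_fd)
{
	struct bpf_sk_lookup ctx = {
		.family      = AF_INET,
		.protocol    = IPPROTO_TCP,
		.remote_ip4  = htonl(0x7f000001),	/* 127.0.0.1 */
		.remote_port = htons(60000),
		.local_ip4   = htonl(0x7f000001),
		.local_port  = 80,			/* host byte order */
	};
	DECLARE_LIBBPF_OPTS(bpf_test_run_opts, opts,
		.ctx_in = &ctx,
		.ctx_size_in = sizeof(ctx),
		.ctx_out = &ctx,
		.ctx_size_out = sizeof(ctx),
	);
	int err = bpf_prog_test_run_opts(prog_fd, &opts);

	if (!err)
		/* cookie of the selected socket, or 0 if none was picked */
		printf("selected socket cookie: %llu\n",
		       (unsigned long long)ctx.cookie);
	return err;
}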
Signed-off-by: Lorenz Bauer <lmb@cloudflare.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20210303101816.36774-3-lmb@cloudflare.com
Lorenz Bauer [Wed, 3 Mar 2021 10:18:12 +0000 (10:18 +0000)]
bpf: Consolidate shared test timing code
Share the timing / signal interruption logic between different
implementations of PROG_TEST_RUN. There is a change in behaviour
as well. We check the loop exit condition before checking for
pending signals. This resolves an edge case where a signal
arrives during the last iteration. Instead of aborting with
EINTR, we return the successful result to user space.
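A rough sketch of the shared pattern (the bpf_test_timer name comes
from the changelog; signatures and surrounding variables are
approximate):

/* Sketch: consolidated run loop; 'prog', 'ctx', 'retval' and 'repeat'
 * come from the surrounding PROG_TEST_RUN implementation. */
struct bpf_test_timer t = { NO_PREEMPT };
u32 duration = 0;
int ret = 0;

bpf_test_timer_enter(&t);
do {
	retval = BPF_PROG_RUN(prog, ctx);
	/* Note the order: the repeat count is checked before pending
	 * signals, so a signal during the last iteration still yields
	 * the successful result rather than -EINTR. */
} while (bpf_test_timer_continue(&t, repeat, &ret, &duration));
bpf_test_timer_leave(&t);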
Signed-off-by: Lorenz Bauer <lmb@cloudflare.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20210303101816.36774-2-lmb@cloudflare.com
Alexei Starovoitov [Fri, 5 Mar 2021 02:39:46 +0000 (18:39 -0800)]
Merge branch 'Improve BPF syscall command documentation'
Joe Stringer says:
====================
The state of bpf(2) manual pages today is not exactly ideal. For the
most part, it was written several years ago and has not kept up with the
pace of development in the kernel tree. For instance, out of a total of
~35 commands to the BPF syscall available today, when I pull the
kernel-man-pages tree today I find just 6 documented commands: the very
basics of map interaction and program load.
In contrast, looking at bpf-helpers(7), I am able today to run one
command[0] to fetch API documentation of the very latest eBPF helpers
that have been added to the kernel. This documentation is up to date
because kernel maintainers enforce documenting the APIs as part of
the feature submission process. As far as I can tell, we rely on manual
synchronization from the kernel tree to the kernel-man-pages tree to
distribute these more widely, so all locations may not be completely up
to date. That said, the documentation does in fact exist in the first
place which is a major initial hurdle to overcome.
Given the relative success of the process around bpf-helpers(7) to
encourage developers to document their user-facing changes, in this
patch series I explore applying this technique to bpf(2) as well.
Unfortunately, even with bpf(2) being so out-of-date, there is still a
lot of content to convert over. In particular, the following aspects of
the bpf syscall could also be individually generated from separate
documentation in the header:
* BPF syscall commands
* BPF map types
* BPF program types
* BPF program subtypes (aka expected_attach_type)
* BPF attachment points
Rather than tackle everything at once, I have focused in this series on
the syscall commands, "enum bpf_cmd". This series is structured to first
import what useful descriptions there are from the kernel-man-pages
tree, then piece-by-piece document a few of the syscalls in more detail
in cases where I could find useful documentation from the git tree or
from a casual read of the code. Not all documentation is comprehensive
at this point, but a basis is provided with examples that can be further
enhanced with subsequent follow-up patches. Note, the series in its
current state only includes documentation around the syscall commands
themselves, so in the short term it doesn't allow us to automate bpf(2)
generation; only one section of the man page could be replaced. Though
if there is appetite for this approach, this should be trivial to
improve on, even if just by importing the remaining static text from the
kernel-man-pages tree.
Following that, the series enhances the python scripting around parsing
the descriptions from the header files and generating dedicated
ReStructured Text and troff output. Finally, to expose the new text and
reduce the likelihood of having it get out of date or break the docs
parser, it is added to the selftests and exposed through the kernel
documentation web pages.
The eventual goal of this effort would be to extend the kernel UAPI
headers such that each of the categories I had listed above (commands,
maps, progs, hooks) have dedicated documentation in the kernel tree, and
that developers must update the comments in the headers to document the
APIs prior to patch acceptance, and that we could auto-generate the
latest version of the bpf(2) manual pages based on a few static
description sections combined with the dynamically-generated output from
the header.
This patch series can also be found at the following location on GitHub:
https://github.com/joestringer/linux/tree/submit/bpf-command-docs_v2
Thanks also to Quentin Monnet for initial review.
[0]: make -C tools/testing/selftests/bpf docs
v2:
* Remove build infrastructure in favor of kernel-doc directives
* Shift userspace-api docs under Documentation/userspace-api/ebpf
* Fix scripts/bpf_doc.py syscall --header (throw unsupported error)
* Improve .gitignore handling of newly autogenerated files
====================
Acked-by: Jonathan Corbet <corbet@lwn.net>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Joe Stringer [Tue, 2 Mar 2021 17:19:47 +0000 (09:19 -0800)]
tools: Sync uapi bpf.h header with latest changes
Synchronize the header after all of the recent changes.
Signed-off-by: Joe Stringer <joe@cilium.io>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Reviewed-by: Quentin Monnet <quentin@isovalent.com>
Acked-by: Toke Høiland-Jørgensen <toke@redhat.com>
Link: https://lore.kernel.org/bpf/20210302171947.2268128-16-joe@cilium.io
Joe Stringer [Tue, 2 Mar 2021 17:19:46 +0000 (09:19 -0800)]
docs/bpf: Add bpf() syscall command reference
Generate the syscall command reference from the UAPI header file and
include it in the main bpf docs page.
Signed-off-by: Joe Stringer <joe@cilium.io>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Reviewed-by: Quentin Monnet <quentin@isovalent.com>
Acked-by: Toke Høiland-Jørgensen <toke@redhat.com>
Link: https://lore.kernel.org/bpf/20210302171947.2268128-15-joe@cilium.io
Joe Stringer [Tue, 2 Mar 2021 17:19:45 +0000 (09:19 -0800)]
selftests/bpf: Test syscall command parsing
Add building of the bpf(2) syscall commands documentation as part of the
docs building step in the build. This allows us to pick up on potential
parse errors from the docs generator script as part of selftests.
The generated manual pages here are not intended for distribution, they
are just a fragment that can be integrated into the other static text of
bpf(2) to form the full manual page.
Signed-off-by: Joe Stringer <joe@cilium.io>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Reviewed-by: Quentin Monnet <quentin@isovalent.com>
Acked-by: Toke Høiland-Jørgensen <toke@redhat.com>
Acked-by: Yonghong Song <yhs@fb.com>
Link: https://lore.kernel.org/bpf/20210302171947.2268128-14-joe@cilium.io
Joe Stringer [Tue, 2 Mar 2021 17:19:44 +0000 (09:19 -0800)]
selftests/bpf: Templatize man page generation
Previously, the Makefile here was only targeting a single manual page so
it just hardcoded a bunch of individual rules to specifically handle
build, clean, install, uninstall for that particular page.
Upcoming commits will generate manual pages for an additional section,
so this commit prepares the makefile first by converting the existing
targets into an evaluated set of targets based on the manual page name
and section.
Signed-off-by: Joe Stringer <joe@cilium.io>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Reviewed-by: Quentin Monnet <quentin@isovalent.com>
Acked-by: Toke Høiland-Jørgensen <toke@redhat.com>
Link: https://lore.kernel.org/bpf/20210302171947.2268128-13-joe@cilium.io
Joe Stringer [Tue, 2 Mar 2021 17:19:43 +0000 (09:19 -0800)]
tools/bpf: Remove bpf-helpers from bpftool docs
This logic is used for validating the manual pages from selftests, so
move the infra under tools/testing/selftests/bpf/ and rely on selftests
for validation rather than tying it into the bpftool build.
Signed-off-by: Joe Stringer <joe@cilium.io>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Reviewed-by: Quentin Monnet <quentin@isovalent.com>
Acked-by: Toke Høiland-Jørgensen <toke@redhat.com>
Link: https://lore.kernel.org/bpf/20210302171947.2268128-12-joe@cilium.io
Joe Stringer [Tue, 2 Mar 2021 17:19:42 +0000 (09:19 -0800)]
scripts/bpf: Add syscall commands printer
Add a new target to bpf_doc.py to support generating the list of syscall
commands directly from the UAPI headers. Assuming that developer
submissions keep the main header up to date, this should allow the man
pages to be automatically generated based on the latest API changes
rather than requiring someone to separately go back through the API and
describe each command.
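Presumably the new target is invoked along these lines (the 'syscall'
target name appears in the cover letter; the output path is
illustrative):

$ ./scripts/bpf_doc.py syscall > /tmp/bpf-syscall-commands.rst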
Signed-off-by: Joe Stringer <joe@cilium.io>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Reviewed-by: Quentin Monnet <quentin@isovalent.com>
Acked-by: Toke Høiland-Jørgensen <toke@redhat.com>
Link: https://lore.kernel.org/bpf/20210302171947.2268128-11-joe@cilium.io
Joe Stringer [Tue, 2 Mar 2021 17:19:41 +0000 (09:19 -0800)]
scripts/bpf: Abstract eBPF API target parameter
Abstract out the target parameter so that, in upcoming commits, more
than just the existing "helpers" target can be called to generate
specific portions of the docs from the eBPF UAPI headers.
Signed-off-by: Joe Stringer <joe@cilium.io>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Reviewed-by: Quentin Monnet <quentin@isovalent.com>
Acked-by: Toke Høiland-Jørgensen <toke@redhat.com>
Link: https://lore.kernel.org/bpf/20210302171947.2268128-10-joe@cilium.io
Joe Stringer [Tue, 2 Mar 2021 17:19:40 +0000 (09:19 -0800)]
bpf: Document BPF_MAP_*_BATCH syscall commands
Based roughly on the following commits:
* Commit cb4d03ab499d ("bpf: Add generic support for lookup batch op")
* Commit 057996380a42 ("bpf: Add batch ops to all htab bpf map")
* Commit aa2e93b8e58e ("bpf: Add generic support for update and delete
  batch ops")
Signed-off-by: Joe Stringer <joe@cilium.io>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Reviewed-by: Quentin Monnet <quentin@isovalent.com>
Acked-by: Toke Høiland-Jørgensen <toke@redhat.com>
Acked-by: Brian Vazquez <brianvv@google.com>
Acked-by: Yonghong Song <yhs@fb.com>
Link: https://lore.kernel.org/bpf/20210302171947.2268128-9-joe@cilium.io
Joe Stringer [Tue, 2 Mar 2021 17:19:39 +0000 (09:19 -0800)]
bpf: Document BPF_PROG_QUERY syscall command
Commit 468e2f64d220 ("bpf: introduce BPF_PROG_QUERY command")
originally introduced this, but there have been several additions since
then.
Unlike BPF_PROG_ATTACH, it appears that the sockmap progs are not able
to be queried so far.
Signed-off-by: Joe Stringer <joe@cilium.io>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Reviewed-by: Quentin Monnet <quentin@isovalent.com>
Acked-by: Toke Høiland-Jørgensen <toke@redhat.com>
Acked-by: Yonghong Song <yhs@fb.com>
Link: https://lore.kernel.org/bpf/20210302171947.2268128-8-joe@cilium.io
Joe Stringer [Tue, 2 Mar 2021 17:19:38 +0000 (09:19 -0800)]
bpf: Document BPF_PROG_TEST_RUN syscall command
Based on a brief read of the corresponding source code.
Signed-off-by: Joe Stringer <joe@cilium.io>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Reviewed-by: Quentin Monnet <quentin@isovalent.com>
Acked-by: Toke Høiland-Jørgensen <toke@redhat.com>
Acked-by: Yonghong Song <yhs@fb.com>
Link: https://lore.kernel.org/bpf/20210302171947.2268128-7-joe@cilium.io
Joe Stringer [Tue, 2 Mar 2021 17:19:37 +0000 (09:19 -0800)]
bpf: Document BPF_PROG_ATTACH syscall command
Document the prog attach command in more detail, based on git commits:
* commit f4324551489e ("bpf: add BPF_PROG_ATTACH and BPF_PROG_DETACH
  commands")
* commit 4f738adba30a ("bpf: create tcp_bpf_ulp allowing BPF to monitor
  socket TX/RX data")
* commit f4364dcfc86d ("media: rc: introduce BPF_PROG_LIRC_MODE2")
* commit d58e468b1112 ("flow_dissector: implements flow dissector BPF
  hook")
Signed-off-by: Joe Stringer <joe@cilium.io>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Reviewed-by: Quentin Monnet <quentin@isovalent.com>
Acked-by: Toke Høiland-Jørgensen <toke@redhat.com>
Acked-by: Yonghong Song <yhs@fb.com>
Link: https://lore.kernel.org/bpf/20210302171947.2268128-6-joe@cilium.io
Joe Stringer [Tue, 2 Mar 2021 17:19:36 +0000 (09:19 -0800)]
bpf: Document BPF_PROG_PIN syscall command
Commit b2197755b263 ("bpf: add support for persistent maps/progs")
contains the original implementation and git logs, used as reference for
this documentation.
Also pull in the filename restriction as documented in commit
6d8cb045cde6 ("bpf: comment why dots in filenames under BPF virtual FS
are not allowed").
Signed-off-by: Joe Stringer <joe@cilium.io>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Reviewed-by: Quentin Monnet <quentin@isovalent.com>
Acked-by: Toke Høiland-Jørgensen <toke@redhat.com>
Acked-by: Yonghong Song <yhs@fb.com>
Link: https://lore.kernel.org/bpf/20210302171947.2268128-5-joe@cilium.io
Joe Stringer [Tue, 2 Mar 2021 17:19:35 +0000 (09:19 -0800)]
bpf: Document BPF_F_LOCK in syscall commands
Document the meaning of the BPF_F_LOCK flag for the map lookup/update
descriptions. Based on commit 96049f3afd50 ("bpf: introduce BPF_F_LOCK
flag").
Signed-off-by: Joe Stringer <joe@cilium.io>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Reviewed-by: Quentin Monnet <quentin@isovalent.com>
Acked-by: Toke Høiland-Jørgensen <toke@redhat.com>
Acked-by: Yonghong Song <yhs@fb.com>
Link: https://lore.kernel.org/bpf/20210302171947.2268128-4-joe@cilium.io
Joe Stringer [Tue, 2 Mar 2021 17:19:34 +0000 (09:19 -0800)]
bpf: Add minimal bpf() command documentation
Introduce high-level descriptions of the intent and return codes of the
bpf() syscall commands. Subsequent patches may further flesh out the
content to provide a more useful programming reference.
Signed-off-by: Joe Stringer <joe@cilium.io>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Reviewed-by: Quentin Monnet <quentin@isovalent.com>
Acked-by: Toke Høiland-Jørgensen <toke@redhat.com>
Acked-by: Yonghong Song <yhs@fb.com>
Link: https://lore.kernel.org/bpf/20210302171947.2268128-3-joe@cilium.io
Joe Stringer [Tue, 2 Mar 2021 17:19:33 +0000 (09:19 -0800)]
bpf: Import syscall arg documentation
These descriptions are present in the man-pages project from the
original submissions around 2015-2016. Import them so that they can be
kept up to date as developers extend the bpf syscall commands.
These descriptions follow the pattern used by scripts/bpf_helpers_doc.py
so that we can take advantage of the parser to generate more up-to-date
man page text based upon these headers.
Some minor wording adjustments were made to make the descriptions
more consistent for the description / return format.
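The comment format mirrors the one already used for helper functions
in the UAPI header; roughly (the wording of the entry below is
illustrative):

/* In include/uapi/linux/bpf.h, above enum bpf_cmd:
 *
 * BPF_MAP_CREATE
 *	Description
 *		Create a map and return a file descriptor that refers
 *		to the map.
 *	Return
 *		A new file descriptor (a nonnegative integer), or -1 if
 *		an error occurred (in which case, errno is set
 *		appropriately).
 */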
Signed-off-by: Joe Stringer <joe@cilium.io>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Reviewed-by: Quentin Monnet <quentin@isovalent.com>
Acked-by: Toke Høiland-Jørgensen <toke@redhat.com>
Acked-by: Yonghong Song <yhs@fb.com>
Link: https://lore.kernel.org/bpf/20210302171947.2268128-2-joe@cilium.io
Co-authored-by: Alexei Starovoitov <ast@kernel.org>
Co-authored-by: Michael Kerrisk <mtk.manpages@gmail.com>
Alexei Starovoitov [Fri, 5 Mar 2021 01:58:16 +0000 (17:58 -0800)]
Merge branch 'Add BTF_KIND_FLOAT support'
Ilya Leoshkevich says:
====================
Some BPF programs compiled on s390 fail to load, because s390
arch-specific linux headers contain float and double types.
Introduce support for such types by representing them using the new
BTF_KIND_FLOAT. This series deals with libbpf, bpftool, and the
in-kernel BTF parser, as well as selftests and documentation.
There are also pahole and LLVM parts:
* https://github.com/iii-i/dwarves/commit/btf-kind-float-v2
* https://reviews.llvm.org/D83289
but they should go in after the libbpf part is integrated.
---
v0: https://lore.kernel.org/bpf/20210210030317.78820-1-iii@linux.ibm.com/
v0 -> v1: Per Andrii's suggestion, remove the unnecessary trailing u32.
v1: https://lore.kernel.org/bpf/20210216011216.3168-1-iii@linux.ibm.com/
v1 -> v2: John noticed that sanitization corrupts BTF, because new and
old sizes don't match. Per Yonghong's suggestion, use a
modifier type (which has the same size as the float type) as
a replacement.
Per Yonghong's suggestion, add size and alignment checks to
the kernel BTF parser.
v2: https://lore.kernel.org/bpf/20210219022543.20893-1-iii@linux.ibm.com/
v2 -> v3: Based on Yonghong's suggestions: Use BTF_KIND_CONST instead of
BTF_KIND_TYPEDEF and make sure that the C code generated from
the sanitized BTF is well-formed; fix size calculation in
tests and use NAME_TBD everywhere; limit allowed sizes to 2,
4, 8, 12 and 16 (this should also fix m68k and nds32le
builds).
v3: https://lore.kernel.org/bpf/20210220034959.27006-1-iii@linux.ibm.com/
v3 -> v4: More fixes for the Yonghong's findings: fix the outdated
comment in bpf_object__sanitize_btf() and add the error
handling there (I've decided to check uint_id and uchar_id
too in order to simplify debugging); add bpftool output
example; use div64_u64_rem() instead of % in order to fix the
linker error.
Also fix the "invalid BTF_INFO" test (new commit, #4).
v4: https://lore.kernel.org/bpf/20210222214917.83629-1-iii@linux.ibm.com/
v4 -> v5: Fixes for the Andrii's findings: Use BTF_KIND_STRUCT instead
of BTF_KIND_TYPEDEF for sanitization; check byte_sz in
libbpf; move btf__add_float; remove relo support; add a dedup
test (new commit, #7).
v5: https://lore.kernel.org/bpf/20210223231459.99664-1-iii@linux.ibm.com/
v5 -> v6: Fixes for further findings by Andrii: split whitespace issue
fix into a separate patch; add 12-byte float to "float test #1,
well-formed".
v6: https://lore.kernel.org/bpf/20210224234535.106970-1-iii@linux.ibm.com/
v6 -> v7: John suggested to add a comment explaining why sanitization
does not preserve the type name, as well as what effect it has
on running the code on the older kernels.
Yonghong has asked to add a comment explaining why we are not
checking the alignment very precisely in the kernel.
John suggested to add a bpf_core_field_size test (commit #9).
Based on Alexei's feedback [1] I'm proceeding with the BTF_KIND_FLOAT
approach.
[1] https://lore.kernel.org/bpf/CAADnVQKWPODWZ2RSJ5FJhfYpxkuV0cvSAL1O+FSr9oP1ercoBg@mail.gmail.com/
====================
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Ilya Leoshkevich [Fri, 26 Feb 2021 20:22:56 +0000 (21:22 +0100)]
bpf: Document BTF_KIND_FLOAT in btf.rst
Also document the expansion of the kind bitfield.
Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Yonghong Song <yhs@fb.com>
Link: https://lore.kernel.org/bpf/20210226202256.116518-11-iii@linux.ibm.com
Ilya Leoshkevich [Fri, 26 Feb 2021 20:22:54 +0000 (21:22 +0100)]
selftests/bpf: Add BTF_KIND_FLOAT to the existing deduplication tests
Check that floats don't interfere with struct deduplication, that they
are not merged with other kinds, and that floats of different sizes are
not merged with each other.
Suggested-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20210226202256.116518-9-iii@linux.ibm.com
Ilya Leoshkevich [Fri, 26 Feb 2021 20:22:53 +0000 (21:22 +0100)]
selftest/bpf: Add BTF_KIND_FLOAT tests
Test the good variants as well as the potential malformed ones.
Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Yonghong Song <yhs@fb.com>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20210226202256.116518-8-iii@linux.ibm.com
Ilya Leoshkevich [Fri, 26 Feb 2021 20:22:52 +0000 (21:22 +0100)]
bpf: Add BTF_KIND_FLOAT support
On the kernel side, introduce a new btf_kind_operations. It is
similar to that of BTF_KIND_INT; however, it does not need to
handle encodings and bit offsets. Do not implement printing, since
the kernel does not know how to format floating-point values.
Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Yonghong Song <yhs@fb.com>
Link: https://lore.kernel.org/bpf/20210226202256.116518-7-iii@linux.ibm.com
Ilya Leoshkevich [Fri, 26 Feb 2021 20:22:51 +0000 (21:22 +0100)]
selftests/bpf: Use the 25th bit in the "invalid BTF_INFO" test
The bit being checked by this test is no longer reserved after
introducing BTF_KIND_FLOAT, so use the next one instead.
Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Yonghong Song <yhs@fb.com>
Link: https://lore.kernel.org/bpf/20210226202256.116518-6-iii@linux.ibm.com
Ilya Leoshkevich [Fri, 26 Feb 2021 20:22:50 +0000 (21:22 +0100)]
tools/bpftool: Add BTF_KIND_FLOAT support
Only dumping support needs to be adjusted; the code structure follows
that of BTF_KIND_INT. Example plain and JSON outputs:
[4] FLOAT 'float' size=4
{"id":4,"kind":"FLOAT","name":"float","size":4}
Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Yonghong Song <yhs@fb.com>
Link: https://lore.kernel.org/bpf/20210226202256.116518-5-iii@linux.ibm.com
Ilya Leoshkevich [Fri, 26 Feb 2021 20:22:49 +0000 (21:22 +0100)]
libbpf: Add BTF_KIND_FLOAT support
The logic follows that of BTF_KIND_INT most of the time. Sanitization
replaces BTF_KIND_FLOATs with equally-sized empty BTF_KIND_STRUCTs on
older kernels, for example, the following:
[4] FLOAT 'float' size=4
becomes the following:
[4] STRUCT '(anon)' size=4 vlen=0
With the dwarves patch [1] and this patch, older kernels, which were
failing with floating-point-related errors, will now start working
correctly.
[1] https://github.com/iii-i/dwarves/commit/btf-kind-float-v2
Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20210226202256.116518-4-iii@linux.ibm.com
Ilya Leoshkevich [Fri, 26 Feb 2021 20:22:48 +0000 (21:22 +0100)]
libbpf: Fix whitespace in btf_add_composite() comment
Remove trailing space.
Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Yonghong Song <yhs@fb.com>
Link: https://lore.kernel.org/bpf/20210226202256.116518-3-iii@linux.ibm.com
Ilya Leoshkevich [Fri, 26 Feb 2021 20:22:47 +0000 (21:22 +0100)]
bpf: Add BTF_KIND_FLOAT to uapi
Add a new kind value and expand the kind bitfield.
Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Yonghong Song <yhs@fb.com>
Link: https://lore.kernel.org/bpf/20210226202256.116518-2-iii@linux.ibm.com
Yonghong Song [Fri, 26 Feb 2021 22:38:10 +0000 (14:38 -0800)]
selftests/bpf: Add a verifier scale test with unknown bounded loop
The original bcc pull request https://github.com/iovisor/bcc/pull/3270 exposed
a verifier failure with Clang 12/13 while Clang 4 works fine.
Further investigation exposed two issues:
Issue 1: LLVM may generate code which uses a less refined value. The issue is
fixed in LLVM patch: https://reviews.llvm.org/D97479
Issue 2: Spills with initial value 0 are marked as precise, which makes later
state pruning less effective. This is my rough initial analysis and
further investigation is needed to find how to improve verifier
pruning in such cases.
With the above LLVM patch, for the new loop6.c test, which has a smaller
loop bound compared to the original test, I got:
$ test_progs -s -n 10/16
...
stack depth 64
processed 390735 insns (limit 1000000) max_states_per_insn 87
total_states 8658 peak_states 964 mark_read 6
#10/16 loop6.o:OK
Using the original loop bound, i.e., commenting out "#define WORKAROUND", I got:
$ test_progs -s -n 10/16
...
BPF program is too large. Processed 1000001 insn
stack depth 64
processed 1000001 insns (limit 1000000) max_states_per_insn 91
total_states 23176 peak_states 5069 mark_read 6
...
#10/16 loop6.o:FAIL
The purpose of this patch is to provide a regression test for the above LLVM fix
and also provide a test case for further analyzing the verifier pruning issue.
Signed-off-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Cc: Zhenwei Pi <pizhenwei@bytedance.com>
Link: https://lore.kernel.org/bpf/20210226223810.236472-1-yhs@fb.com
Cong Wang [Mon, 1 Mar 2021 18:48:05 +0000 (10:48 -0800)]
skmsg: Add function doc for skb->_sk_redir
This should fix the following warning:
include/linux/skbuff.h:932: warning: Function parameter or member '_sk_redir' not described in 'sk_buff'
Reported-by: Lorenz Bauer <lmb@cloudflare.com>
Signed-off-by: Cong Wang <cong.wang@bytedance.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Acked-by: Lorenz Bauer <lmb@cloudflare.com>
Link: https://lore.kernel.org/bpf/20210301184805.8174-1-xiyou.wangcong@gmail.com
Andrii Nakryiko [Wed, 3 Mar 2021 00:40:10 +0000 (16:40 -0800)]
tools/runqslower: Allow substituting custom vmlinux.h for the build
Just as was done for bpftool and selftests in commits ec23eb705620
("tools/bpftool: Allow substituting custom vmlinux.h for the build") and
ca4db6389d61 ("selftests/bpf: Allow substituting custom vmlinux.h for
selftests build"), allow providing a pre-generated vmlinux.h for the
runqslower build.
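Presumably mirroring the bpftool/selftests knob, usage would look
something like this (the VMLINUX_H variable name is assumed from the
earlier commits):

$ make VMLINUX_H=/path/to/vmlinux.h -C tools/bpf/runqslower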
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: Martin KaFai Lau <kafai@fb.com>
Acked-by: Yonghong Song <yhs@fb.com>
Link: https://lore.kernel.org/bpf/20210303004010.653954-1-andrii@kernel.org
Ian Denhardt [Wed, 24 Feb 2021 02:24:00 +0000 (21:24 -0500)]
tools, bpf_asm: Exit non-zero on errors
... so callers can correctly detect failure.
Signed-off-by: Ian Denhardt <ian@zenhack.net>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/b0bea780bc292f29e7b389dd062f20adc2a2d634.1614201868.git.ian@zenhack.net
Ian Denhardt [Wed, 24 Feb 2021 02:15:31 +0000 (21:15 -0500)]
tools, bpf_asm: Hard error on out of range jumps
Per discussion at [0] this was originally introduced as a warning due
to concerns about breaking existing code, but a hard error probably
makes more sense, especially given that concerns about breakage were
only speculation.
[0] https://lore.kernel.org/bpf/c964892195a6b91d20a67691448567ef528ffa6d.camel@linux.ibm.com/T/#t
Signed-off-by: Ian Denhardt <ian@zenhack.net>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Ilya Leoshkevich <iii@linux.ibm.com>
Link: https://lore.kernel.org/bpf/a6b6c7516f5d559049d669968e953b4a8d7adea3.1614201868.git.ian@zenhack.net
Alexei Starovoitov [Fri, 26 Feb 2021 20:55:43 +0000 (12:55 -0800)]
Merge branch 'bpf: add bpf_for_each_map_elem() helper'
Yonghong Song says:
====================
This patch set introduced the bpf_for_each_map_elem() helper.
The helper permits a bpf program to iterate through all elements
of a particular map.
The work was originally inspired by an internal discussion where
firewall rules are kept in a map and the bpf prog wants to
check a packet's 5-tuple against all rules in the map.
A bounded loop can be used, but it has a few drawbacks.
As the loop iteration count goes up, verification time goes up too.
For really large maps, verification may fail.
A helper which abstracts out the loop itself will not have
the verification time issue.
A recent discussion in [1] involves iterating over all hash map
elements in a bpf program. Currently, iterating over all hashmap
elements in a bpf program is not easy if the key space is really big.
Having a helper to abstract out the loop itself is even more
meaningful.
The proposed helper signature looks like:
long bpf_for_each_map_elem(map, callback_fn, callback_ctx, flags)
where callback_fn is a static function and callback_ctx is
a piece of data allocated on the caller stack which can be
accessed by the callback_fn. The callback_fn signature might be
different for different maps. For example, for hash/array maps,
the signature is
long callback_fn(map, key, val, callback_ctx)
In the rest of the series, patches 1/2/3/4 do some refactoring. Patch 5
implements core kernel support for the helper. Patches 6 and 7
add hashmap and arraymap support. Patches 8/9 add libbpf
support. Patch 10 adds bpftool support. Patches 11 and 12 add
selftests for hashmap and arraymap.
[1]: https://lore.kernel.org/bpf/20210122205415.113822-1-xiyou.wangcong@gmail.com/
Changelogs:
v4 -> v5:
- rebase on top of bpf-next.
v3 -> v4:
- better refactoring of check_func_call(), calculate subprogno outside
of __check_func_call() helper. (Andrii)
- better documentation (like the list of supported maps and their
callback signatures) in uapi header. (Andrii)
- implement and use ASSERT_LT in selftests. (Andrii)
- a few other minor changes.
v2 -> v3:
- add comments in retrieve_ptr_limit(), which is in sanitize_ptr_alu(),
to clarify the code is not executed for PTR_TO_MAP_KEY handling,
but code is manually tested. (Alexei)
- require BTF for callback function. (Alexei)
- simplify hashmap/arraymap callback return handling as return value
[0, 1] has been enforced by the verifier. (Alexei)
- also mark global subprog (if used in ld_imm64) as RELO_SUBPROG_ADDR. (Andrii)
- handle the condition to mark RELO_SUBPROG_ADDR properly. (Andrii)
- make bpftool subprog insn offset dumping consistent with pcrel calls. (Andrii)
v1 -> v2:
- setup callee frame in check_helper_call() and then proceed to verify
helper return value as normal (Alexei)
- use meta data to keep track of map/func pointer to avoid hard coding
the register number (Alexei)
- verify callback_fn return value range [0, 1]. (Alexei)
- add migrate_{disable, enable} to ensure percpu value is the one
bpf program expects to see. (Alexei)
- change bpf_for_each_map_elem() return value to the number of iterated
elements. (Andrii)
- Change libbpf pseudo_func relo name to RELO_SUBPROG_ADDR and use
more rigid checking for the relocation. (Andrii)
- Better format to print out subprog address with bpftool. (Andrii)
- Use bpf_prog_test_run to trigger bpf run, instead of bpf_iter. (Andrii)
- Other misc changes.
====================
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Yonghong Song [Fri, 26 Feb 2021 20:49:34 +0000 (12:49 -0800)]
selftests/bpf: Add arraymap test for bpf_for_each_map_elem() helper
A test is added for arraymap and percpu arraymap. The test also
exercises the early return for the helper which does not
traverse all elements.
$ ./test_progs -n 45
#45/1 hash_map:OK
#45/2 array_map:OK
#45 for_each:OK
Summary: 1/2 PASSED, 0 SKIPPED, 0 FAILED
Signed-off-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20210226204934.3885756-1-yhs@fb.com
Yonghong Song [Fri, 26 Feb 2021 20:49:33 +0000 (12:49 -0800)]
selftests/bpf: Add hashmap test for bpf_for_each_map_elem() helper
A test case is added for hashmap and percpu hashmap. The test
also exercises nested bpf_for_each_map_elem() calls like
bpf_prog:
bpf_for_each_map_elem(func1)
func1:
bpf_for_each_map_elem(func2)
func2:
$ ./test_progs -n 45
#45/1 hash_map:OK
#45 for_each:OK
Summary: 1/1 PASSED, 0 SKIPPED, 0 FAILED
Signed-off-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20210226204933.3885657-1-yhs@fb.com
Yonghong Song [Fri, 26 Feb 2021 20:49:31 +0000 (12:49 -0800)]
bpftool: Print subprog address properly
With the later hashmap example, the bpftool xlated output may
look like:
int dump_task(struct bpf_iter__task * ctx):
; struct task_struct *task = ctx->task;
0: (79) r2 = *(u64 *)(r1 +8)
; if (task == (void *)0 || called > 0)
...
19: (18) r2 = subprog[+17]
30: (18) r2 = subprog[+25]
...
36: (95) exit
__u64 check_hash_elem(struct bpf_map * map, __u32 * key, __u64 * val,
struct callback_ctx * data):
; struct bpf_iter__task *ctx = data->ctx;
37: (79) r5 = *(u64 *)(r4 +0)
...
55: (95) exit
__u64 check_percpu_elem(struct bpf_map * map, __u32 * key,
__u64 * val, void * unused):
; check_percpu_elem(struct bpf_map *map, __u32 *key, __u64 *val, void *unused)
56: (bf) r6 = r3
...
83: (18) r2 = subprog[-47]
Signed-off-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20210226204931.3885458-1-yhs@fb.com
Yonghong Song [Fri, 26 Feb 2021 20:49:30 +0000 (12:49 -0800)]
libbpf: Support subprog address relocation
A new relocation, RELO_SUBPROG_ADDR, is added to capture
subprog addresses loaded with ld_imm64 insns. Such ld_imm64
insns are marked with BPF_PSEUDO_FUNC and will be passed to the
kernel. For the bpf_for_each_map_elem() case, the kernel will
check that the to-be-used subprog address refers to a static
function and replace it with the proper actual jited func address.
Signed-off-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20210226204930.3885367-1-yhs@fb.com
Yonghong Song [Fri, 26 Feb 2021 20:49:29 +0000 (12:49 -0800)]
libbpf: Move function is_ldimm64() earlier in libbpf.c
Move function is_ldimm64() close to the beginning of libbpf.c,
so it can be reused by later code and the next patch as well.
There is no functionality change.
Signed-off-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20210226204929.3885295-1-yhs@fb.com
Yonghong Song [Fri, 26 Feb 2021 20:49:28 +0000 (12:49 -0800)]
bpf: Add arraymap support for bpf_for_each_map_elem() helper
This patch added support for arraymap and percpu arraymap.
Signed-off-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20210226204928.3885192-1-yhs@fb.com
Yonghong Song [Fri, 26 Feb 2021 20:49:27 +0000 (12:49 -0800)]
bpf: Add hashtab support for bpf_for_each_map_elem() helper
This patch added support for hashmap, percpu hashmap,
lru hashmap and percpu lru hashmap.
Signed-off-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20210226204927.3885020-1-yhs@fb.com
Yonghong Song [Fri, 26 Feb 2021 20:49:25 +0000 (12:49 -0800)]
bpf: Add bpf_for_each_map_elem() helper
The bpf_for_each_map_elem() helper is introduced, which
iterates over all map elements with a callback function. The
helper signature looks like
long bpf_for_each_map_elem(map, callback_fn, callback_ctx, flags)
and for each map element, the callback_fn will be called. For example,
for a hashmap, the callback signature may look like
long callback_fn(map, key, val, callback_ctx)
There are two known use cases for this. One is from upstream ([1]) where
a for_each_map_elem helper may help implement a timeout mechanism
in a more generic way. Another is from our internal discussion
for a firewall use case where a map contains all the rules. The packet
data can be compared to all these rules to decide whether to allow or
deny the packet.
For array maps, users can already use a bounded loop to traverse
elements. Using this helper avoids the bounded loop. For other
types of maps (e.g., hash maps) where a bounded loop is hard or
impossible to use, this helper provides a convenient way to
operate on all elements.
For callback_fn, besides the map and map element, a callback_ctx,
allocated on the caller's stack, is also passed to the callback
function. This callback_ctx argument can provide additional
input and allows writing to the caller's stack for output.
If the callback_fn returns 0, the helper will iterate to the next
element if available. If the callback_fn returns 1, the helper
will stop iterating and return to the bpf program. Other return
values are not used for now.
Currently, this helper is only available with the jit. It is possible
to make it work with the interpreter with some effort, but I leave
that as future work.
[1]: https://lore.kernel.org/bpf/20210122205415.113822-1-xiyou.wangcong@gmail.com/
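A hedged BPF-side sketch of the usage pattern (map layout, section
name and threshold are illustrative, modeled on the selftests later in
the series):

#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

struct {
	__uint(type, BPF_MAP_TYPE_HASH);
	__uint(max_entries, 64);
	__type(key, __u32);
	__type(value, __u64);
} hmap SEC(".maps");

struct callback_ctx {
	__u64 output;
};

/* Called once per element; must be a static function known to the
 * verifier via BTF. Return 0 to continue, 1 to stop early. */
static __u64 check_elem(struct bpf_map *map, __u32 *key, __u64 *val,
			struct callback_ctx *data)
{
	data->output += *val;
	return *val > 100 ? 1 : 0;
}

SEC("classifier")
int sum_values(struct __sk_buff *skb)
{
	struct callback_ctx data = { .output = 0 };

	bpf_for_each_map_elem(&hmap, check_elem, &data, 0);
	return data.output != 0;
}

char _license[] SEC("license") = "GPL";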
Signed-off-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20210226204925.3884923-1-yhs@fb.com
Yonghong Song [Fri, 26 Feb 2021 20:49:24 +0000 (12:49 -0800)]
bpf: Change return value of verifier function add_subprog()
Currently, verifier function add_subprog() returns 0 for success
and a negative value for failure. Change the return value
to be the subprog number for success. This functionality will be
used in the next patch to save a call to find_subprog().
Signed-off-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20210226204924.3884848-1-yhs@fb.com
Yonghong Song [Fri, 26 Feb 2021 20:49:23 +0000 (12:49 -0800)]
bpf: Refactor check_func_call() to allow callback function
The later proposed bpf_for_each_map_elem() helper has a callback
function as one of its arguments. This patch refactors
check_func_call() to permit a callback function which sets the
callee state. Different callback functions may have
different callee states.
There is no functionality change in this patch.
Signed-off-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20210226204923.3884627-1-yhs@fb.com
Yonghong Song [Fri, 26 Feb 2021 20:49:22 +0000 (12:49 -0800)]
bpf: Factor out verbose_invalid_scalar()
Factor out the function verbose_invalid_scalar() to verbosely
print if a scalar is not in a tnum range. There is no
functionality change, and the function will be used by a
later patch which introduces bpf_for_each_map_elem().
Signed-off-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20210226204922.3884375-1-yhs@fb.com
Yonghong Song [Fri, 26 Feb 2021 20:49:20 +0000 (12:49 -0800)]
bpf: Factor out visit_func_call_insn() in check_cfg()
During verifier check_cfg(), all instructions are
visited to ensure the verifier can handle program control flows.
This patch factors out the function visit_func_call_insn()
so it can be reused in a later patch to visit callback function
calls. There is no functionality change in this patch.
Signed-off-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20210226204920.3884136-1-yhs@fb.com
Ilya Leoshkevich [Wed, 24 Feb 2021 11:14:45 +0000 (12:14 +0100)]
selftests/bpf: Copy extras in out-of-srctree builds
Building selftests in a separate directory like this:
make O="$BUILD" -C tools/testing/selftests/bpf
and then running:
cd "$BUILD" && ./test_progs -t btf
causes all the non-flavored btf_dump_test_case_*.c tests to fail,
because these files are not copied to where test_progs expects to find
them.
Fix by not skipping EXT-COPY when the original $(OUTPUT) is not empty
(lib.mk sets it to $(shell pwd) in that case) and using rsync instead
of cp: cp fails because e.g. urandom_read is being copied into itself,
and rsync simply skips such cases. rsync is already used by kselftests
and therefore is not a new dependency.
Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20210224111445.102342-1-iii@linux.ibm.com
KP Singh [Thu, 25 Feb 2021 16:19:47 +0000 (16:19 +0000)]
selftests/bpf: Propagate error code of the command to vmtest.sh
When vmtest.sh ran a command in a VM, it did not record or propagate the
error code of the command. This made the script less "script-able". The
script now saves the error code of the said command in a file in the VM,
copies the file back to the host and (when available) uses this error
code instead of its own.
Signed-off-by: KP Singh <kpsingh@google.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20210225161947.1778590-1-kpsingh@kernel.org
Alexei Starovoitov [Fri, 26 Feb 2021 20:28:04 +0000 (12:28 -0800)]
Merge branch 'sock_map: clean up and refactor code for BPF_SK_SKB_VERDICT'
Cong Wang says:
====================
From: Cong Wang <cong.wang@bytedance.com>
This patchset is the first series of patches separated out from
the original large patchset, to make reviews easier. This patchset
does not add any new feature or change any functionality, but merely
cleans up the existing sockmap and skmsg code and refactors it, to
prepare for the follow-up patches. It passed all BPF selftests.
To see the big picture, the original whole patchset is available
on github: https://github.com/congwang/linux/tree/sockmap
and this patchset is also available on github:
https://github.com/congwang/linux/tree/sockmap1
---
v7: add 1 trivial cleanup patch
define a mask for sk_redir
fix CONFIG_BPF_SYSCALL in include/net/udp.h
make sk_psock_done_strp() static
move skb_bpf_redirect_clear() to sk_psock_backlog()
v6: fix !CONFIG_INET case
v5: improve CONFIG_BPF_SYSCALL dependency
add 3 trivial cleanup patches
v4: reuse skb dst instead of skb ext
fix another Kconfig error
v3: fix a few Kconfig compile errors
remove an unused variable
add a comment for bpf_convert_data_end_access()
v2: split the original patchset
compute data_end with bpf_convert_data_end_access()
get rid of psock->bpf_running
reduce the scope of CONFIG_BPF_STREAM_PARSER
do not add CONFIG_BPF_SOCK_MAP
====================
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Cong Wang [Tue, 23 Feb 2021 18:49:34 +0000 (10:49 -0800)]
skmsg: Remove unused sk_psock_stop() declaration
It is not defined or used anywhere.
Signed-off-by: Cong Wang <cong.wang@bytedance.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Jakub Sitnicki <jakub@cloudflare.com>
Link: https://lore.kernel.org/bpf/20210223184934.6054-10-xiyou.wangcong@gmail.com
Cong Wang [Tue, 23 Feb 2021 18:49:33 +0000 (10:49 -0800)]
skmsg: Get rid of sk_psock_bpf_run()
It is now nearly identical to bpf_prog_run_pin_on_cpu() and
it has an unused parameter 'psock', so we can just get rid
of it and call bpf_prog_run_pin_on_cpu() directly.
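A hedged one-line before/after of the call sites (surrounding context
is assumed):
```
/* before: thin wrapper carrying an unused psock argument */
ret = sk_psock_bpf_run(psock, prog, skb);

/* after: call the generic runner directly */
ret = bpf_prog_run_pin_on_cpu(prog, skb);
```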
Signed-off-by: Cong Wang <cong.wang@bytedance.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Jakub Sitnicki <jakub@cloudflare.com>
Link: https://lore.kernel.org/bpf/20210223184934.6054-9-xiyou.wangcong@gmail.com
Cong Wang [Tue, 23 Feb 2021 18:49:32 +0000 (10:49 -0800)]
skmsg: Make __sk_psock_purge_ingress_msg() static
It is only used within skmsg.c so can become static.
Signed-off-by: Cong Wang <cong.wang@bytedance.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Jakub Sitnicki <jakub@cloudflare.com>
Link: https://lore.kernel.org/bpf/20210223184934.6054-8-xiyou.wangcong@gmail.com
Cong Wang [Tue, 23 Feb 2021 18:49:31 +0000 (10:49 -0800)]
sock_map: Make sock_map_prog_update() static
It is only used within sock_map.c so can become static.
Suggested-by: Jakub Sitnicki <jakub@cloudflare.com>
Signed-off-by: Cong Wang <cong.wang@bytedance.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Jakub Sitnicki <jakub@cloudflare.com>
Link: https://lore.kernel.org/bpf/20210223184934.6054-7-xiyou.wangcong@gmail.com
Cong Wang [Tue, 23 Feb 2021 18:49:30 +0000 (10:49 -0800)]
sock_map: Rename skb_parser and skb_verdict
These two eBPF programs are tied to BPF_SK_SKB_STREAM_PARSER
and BPF_SK_SKB_STREAM_VERDICT, so rename them to reflect the fact
that they are only used for TCP. And save the name 'skb_verdict' for
general use later.
Signed-off-by: Cong Wang <cong.wang@bytedance.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Reviewed-by: Lorenz Bauer <lmb@cloudflare.com>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Acked-by: Jakub Sitnicki <jakub@cloudflare.com>
Link: https://lore.kernel.org/bpf/20210223184934.6054-6-xiyou.wangcong@gmail.com
Cong Wang [Tue, 23 Feb 2021 18:49:29 +0000 (10:49 -0800)]
skmsg: Move sk_redir from TCP_SKB_CB to skb
Currently TCP_SKB_CB() is hard-coded in the skmsg code, so it
certainly does not work for any other non-TCP protocols. We could move
the fields to an skb extension, but that introduces a memory
allocation on the fast path.
Fortunately, we only need a word-size slot to store all the
information, because the flags actually contain only 1 bit, so it can
be packed into the lowest bit of the "pointer", which is stored as an
unsigned long.
Inside struct sk_buff, '_skb_refdst' can be reused because the skb dst
is no longer needed after ->sk_data_ready(), so we can just drop it.
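A minimal sketch of the bit-packing scheme this describes (the
accessor names are assumptions, not the patch's exact helpers):
```
#include <linux/skbuff.h>

#define SK_REDIR_INGRESS_BIT	0x1UL	/* the single flag bit */

/* Pack the redirect socket pointer and the 1-bit flag in one word;
 * valid because struct sock pointers are more than 2-byte aligned.
 */
static inline void skb_set_redir(struct sk_buff *skb, struct sock *sk,
				 bool ingress)
{
	skb->_skb_refdst = (unsigned long)sk |
			   (ingress ? SK_REDIR_INGRESS_BIT : 0);
}

static inline struct sock *skb_get_redir(const struct sk_buff *skb)
{
	/* mask off the flag bit to recover the pointer */
	return (struct sock *)(skb->_skb_refdst & ~SK_REDIR_INGRESS_BIT);
}

static inline bool skb_redir_ingress(const struct sk_buff *skb)
{
	return skb->_skb_refdst & SK_REDIR_INGRESS_BIT;
}
```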
Signed-off-by: Cong Wang <cong.wang@bytedance.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Acked-by: Jakub Sitnicki <jakub@cloudflare.com>
Link: https://lore.kernel.org/bpf/20210223184934.6054-5-xiyou.wangcong@gmail.com
Cong Wang [Tue, 23 Feb 2021 18:49:28 +0000 (10:49 -0800)]
bpf: Compute data_end dynamically with JIT code
Currently, we compute ->data_end with a compile-time constant
offset of the skb. But as Jakub pointed out, we can actually compute
it in eBPF JIT code at run-time, so that we can completely get
rid of ->data_end. This is similar to the skb_shinfo(skb) computation
in bpf_convert_shinfo_access().
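In C terms, the run-time computation the emitted instructions perform
is roughly the following (a hedged sketch of the idea, not the
verifier code itself):
```
#include <linux/skbuff.h>

/* End of the skb's linear data, derived at run time:
 * data_end = data + (len - data_len) = data + skb_headlen(skb)
 */
static inline void *sk_skb_data_end(const struct sk_buff *skb)
{
	return skb->data + skb_headlen(skb);
}
```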
Suggested-by: Jakub Sitnicki <jakub@cloudflare.com>
Signed-off-by: Cong Wang <cong.wang@bytedance.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Acked-by: Jakub Sitnicki <jakub@cloudflare.com>
Link: https://lore.kernel.org/bpf/20210223184934.6054-4-xiyou.wangcong@gmail.com
Cong Wang [Tue, 23 Feb 2021 18:49:27 +0000 (10:49 -0800)]
skmsg: Get rid of struct sk_psock_parser
struct sk_psock_parser is embedded in sk_psock; it is
unnecessary, as the skb verdict path also uses ->saved_data_ready.
We can simply fold these fields into sk_psock, and get rid
of ->enabled.
Signed-off-by: Cong Wang <cong.wang@bytedance.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Acked-by: Jakub Sitnicki <jakub@cloudflare.com>
Link: https://lore.kernel.org/bpf/20210223184934.6054-3-xiyou.wangcong@gmail.com
Cong Wang [Tue, 23 Feb 2021 18:49:26 +0000 (10:49 -0800)]
bpf: Clean up sockmap related Kconfigs
As suggested by John, clean up the sockmap-related Kconfigs:
Reduce the scope of CONFIG_BPF_STREAM_PARSER down to the TCP stream
parser, to reflect its name.
Make the rest of the sockmap code simply depend on CONFIG_BPF_SYSCALL
and CONFIG_INET; the latter is still needed at this point because
of the TCP/UDP proto update. And leave CONFIG_NET_SOCK_MSG untouched,
as it is used by non-sockmap cases.
Signed-off-by: Cong Wang <cong.wang@bytedance.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Reviewed-by: Lorenz Bauer <lmb@cloudflare.com>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Acked-by: Jakub Sitnicki <jakub@cloudflare.com>
Link: https://lore.kernel.org/bpf/20210223184934.6054-2-xiyou.wangcong@gmail.com
Hangbin Liu [Tue, 23 Feb 2021 13:14:57 +0000 (21:14 +0800)]
bpf: Remove blank line in bpf helper description comment
Commit
34b2021cc616 ("bpf: Add BPF-helper for MTU checking") added an extra
blank line in a bpf helper description. This makes bpf_helpers_doc.py
stop building bpf_helper_defs.h immediately after bpf_check_mtu(),
which will affect functions added in the future.
Fixes:
34b2021cc616 ("bpf: Add BPF-helper for MTU checking")
Signed-off-by: Hangbin Liu <liuhangbin@gmail.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Jesper Dangaard Brouer <brouer@redhat.com>
Link: https://lore.kernel.org/bpf/20210223131457.1378978-1-liuhangbin@gmail.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Alexei Starovoitov [Fri, 26 Feb 2021 20:08:49 +0000 (12:08 -0800)]
Merge branch 'selftests/bpf: xsk improvements and new stats'
Ciara Loftus says:
====================
This series attempts to improve the xsk selftest framework by:
1. making the default output less verbose
2. adding an optional verbose flag to both the test_xsk.sh script and the xdpxceiver app.
3. renaming the debug option in the app to 'dump-pkts' and adding a flag to the test_xsk.sh
script which enables the flag in the app.
4. changing how tests are launched - now they are launched from the xdpxceiver app
instead of the script.
Once the improvements are made, a new set of tests is added to test
the xsk statistics.
The output of the test script now looks like:
./test_xsk.sh
PREREQUISITES: [ PASS ]
1..10
ok 1 PASS: SKB NOPOLL
ok 2 PASS: SKB POLL
ok 3 PASS: SKB NOPOLL Socket Teardown
ok 4 PASS: SKB NOPOLL Bi-directional Sockets
ok 5 PASS: SKB NOPOLL Stats
ok 6 PASS: DRV NOPOLL
ok 7 PASS: DRV POLL
ok 8 PASS: DRV NOPOLL Socket Teardown
ok 9 PASS: DRV NOPOLL Bi-directional Sockets
ok 10 PASS: DRV NOPOLL Stats
# Totals: pass:10 fail:0 xfail:0 xpass:0 skip:0 error:0
XSK KSELFTESTS: [ PASS ]
v2->v3:
* Rename dump-pkts to dump_pkts in test_xsk.sh
* Add examples of flag usage to test_xsk.sh
v1->v2:
* Changed '-d' flag in the shell script to '-D' to be consistent with the xdpxceiver app.
* Renamed debug mode to 'dump-pkts' which better reflects the behaviour.
* Use libbpf APIs instead of calls to ss for configuring xdp on the links
* Remove mutex init & destroy for each stats test
* Added a description for each of the new statistics tests
* Distinguish between exiting due to initialisation failure vs test failure
This series applies on commit
d310ec03a34e92a77302edb804f7d68ee4f01ba0
Ciara Loftus (3):
selftests/bpf: expose and rename debug argument
selftests/bpf: restructure xsk selftests
selftests/bpf: introduce xsk statistics tests
====================
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Ciara Loftus [Tue, 23 Feb 2021 16:23:04 +0000 (16:23 +0000)]
selftests/bpf: Introduce xsk statistics tests
This commit introduces a range of tests to the xsk testsuite
for validating xsk statistics.
A new test type called 'stats' is added. Within it there are
four sub-tests. Each test configures a scenario which should
trigger the given error statistic. The test passes if the statistic
is successfully incremented.
The four statistics for which tests have been created are:
1. rx dropped
Increase the UMEM frame headroom to a value which results in
insufficient space in the rx buffer for both the packet and the headroom.
2. tx invalid
Set the 'len' field of tx descriptors to an invalid value (umem frame
size + 1).
3. rx ring full
Reduce the size of the RX ring to a fraction of the fill ring size.
4. fill queue empty
Do not populate the fill queue and then try to receive pkts.
Signed-off-by: Ciara Loftus <ciara.loftus@intel.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Reviewed-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
Link: https://lore.kernel.org/bpf/20210223162304.7450-5-ciara.loftus@intel.com
Ciara Loftus [Tue, 23 Feb 2021 16:23:03 +0000 (16:23 +0000)]
selftests/bpf: Restructure xsk selftests
Prior to this commit individual xsk tests were launched from the
shell script 'test_xsk.sh'. When adding a new test type, two new test
configurations had to be added to this file - one for each of the
supported XDP 'modes' (skb or drv). Should zero copy support be added to
the xsk selftest framework in the future, three new test configurations
would need to be added for each new test type. Each new test type also
typically requires new CLI arguments for the xdpxceiver program.
This commit aims to reduce the overhead of adding new tests, by launching
the test configurations from within the xdpxceiver program itself, using
simple loops. Every test is run every time the C program is executed. Many
of the CLI arguments can be removed as a result.
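A hedged sketch of the loop structure this implies inside xdpxceiver
(the mode/test enums and the dispatcher are invented for the example):
```
#include <stdio.h>

enum test_mode { TEST_MODE_SKB, TEST_MODE_DRV, TEST_MODE_MAX };
enum test_type { TEST_TYPE_NOPOLL, TEST_TYPE_POLL, TEST_TYPE_TEARDOWN,
		 TEST_TYPE_BIDI, TEST_TYPE_STATS, TEST_TYPE_MAX };

static void run_test(enum test_mode mode, enum test_type type)
{
	/* placeholder for the real socket setup and packet validation */
	printf("running mode %d, test %d\n", mode, type);
}

int main(void)
{
	/* every test runs in every mode on each execution */
	for (int mode = 0; mode < TEST_MODE_MAX; mode++)
		for (int type = 0; type < TEST_TYPE_MAX; type++)
			run_test(mode, type);
	return 0;
}
```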
Signed-off-by: Ciara Loftus <ciara.loftus@intel.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Reviewed-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
Link: https://lore.kernel.org/bpf/20210223162304.7450-4-ciara.loftus@intel.com
Ciara Loftus [Tue, 23 Feb 2021 16:23:02 +0000 (16:23 +0000)]
selftests/bpf: Expose and rename debug argument
Launching xdpxceiver with -D enables what was formerly known as
'debug' mode. Rename this mode to 'dump-pkts' as it better describes
the behavior enabled by the option. New usage:
./xdpxceiver .. -D
or
./xdpxceiver .. --dump-pkts
Also make it possible to pass this flag to the app via the test_xsk.sh
shell script like so:
./test_xsk.sh -D
Signed-off-by: Ciara Loftus <ciara.loftus@intel.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20210223162304.7450-3-ciara.loftus@intel.com
Magnus Karlsson [Tue, 23 Feb 2021 16:23:01 +0000 (16:23 +0000)]
selftest/bpf: Make xsk tests less verbose
Make the xsk tests less verbose by only printing the
essentials. Currently, it is hard to see if the tests passed or not
due to all the printouts. Move the extra printouts to a verbose
option, if further debugging is needed when a problem arises.
To run the xsk tests with verbose output:
./test_xsk.sh -v
Signed-off-by: Magnus Karlsson <magnus.karlsson@intel.com>
Signed-off-by: Ciara Loftus <ciara.loftus@intel.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
Link: https://lore.kernel.org/bpf/20210223162304.7450-2-ciara.loftus@intel.com
Brendan Jackman [Wed, 17 Feb 2021 10:45:09 +0000 (10:45 +0000)]
bpf: Rename fixup_bpf_calls and add some comments
This function has become overloaded; it actually does lots of diverse
things in a single pass. Rename it to avoid confusion, and add some
concise commentary.
Signed-off-by: Brendan Jackman <jackmanb@google.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20210217104509.2423183-1-jackmanb@google.com
Dmitrii Banshchikov [Thu, 25 Feb 2021 20:26:29 +0000 (00:26 +0400)]
bpf: Use MAX_BPF_FUNC_REG_ARGS macro
Instead of using an integer literal here and there, use the macro
name for better context.
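A hedged before/after illustration (the surrounding check is invented
for the example):
```
/* before: a bare literal for the register-argument limit */
if (nargs > 5)
	return -EINVAL;

/* after: the named macro carries the meaning */
if (nargs > MAX_BPF_FUNC_REG_ARGS)
	return -EINVAL;
```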
Signed-off-by: Dmitrii Banshchikov <me@ubique.spb.ru>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Martin KaFai Lau <kafai@fb.com>
Link: https://lore.kernel.org/bpf/20210225202629.585485-1-me@ubique.spb.ru
Alexei Starovoitov [Fri, 26 Feb 2021 19:51:48 +0000 (11:51 -0800)]
Merge branch 'bpf: enable task local storage for tracing'
Song Liu says:
====================
This set enables task local storage for non-BPF_LSM programs.
It is common for tracing BPF programs to access per-task data.
Currently, such data is stored in hash tables with the pid as the key. In
bcc/libbpftools [1], 9 out of 23 tools use such hash tables. However,
a hash table is not ideal for many use cases. Task local storage provides
better usability and performance for BPF programs. Please refer to 6/6 for
some performance comparison of task local storage vs. hash table.
Changes v5 => v6:
1. Add inc/dec bpf_task_storage_busy in bpf_local_storage_map_free().
Changes v4 => v5:
1. Fix build w/o CONFIG_NET. (kernel test robot)
2. Remove unnecessary check for !task_storage_ptr(). (Martin)
3. Small changes in commit logs.
Changes v3 => v4:
1. Prevent deadlock from recursive calls of bpf_task_storage_[get|delete].
(2/6 checks potential deadlock and fails over, 4/6 adds a selftest).
Changes v2 => v3:
1. Make the selftest more robust. (Andrii)
2. Small changes with runqslower. (Andrii)
3. Shorten the CC list to make it easy for vger.
Changes v1 => v2:
1. Do not allocate task local storage when the task is being freed.
2. Revise the selftest and added a new test for a task being freed.
3. Minor changes in runqslower.
====================
Acked-by: Martin KaFai Lau <kafai@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Song Liu [Thu, 25 Feb 2021 23:43:19 +0000 (15:43 -0800)]
bpf: runqslower: Use task local storage
Replace hashtab with task local storage in runqslower. This improves the
performance of these BPF programs. The following table summarizes average
runtime of these programs, in nanoseconds:
                            task-local   hash-prealloc   hash-no-prealloc
handle__sched_wakeup               125             340               3124
handle__sched_wakeup_new          2812            1510               2998
handle__sched_switch               151             208                991
Note that, task local storage gives better performance than hashtab for
handle__sched_wakeup and handle__sched_switch. On the other hand, for
handle__sched_wakeup_new, task local storage is slower than hashtab with
prealloc. This is because handle__sched_wakeup_new accesses the data for
the first time, so it has to allocate the data for task local storage.
Once the initial allocation is done, subsequent accesses, as those in
handle__sched_wakeup, are much faster with task local storage. If we
disable hashtab prealloc, task local storage is much faster for all 3
functions.
Signed-off-by: Song Liu <songliubraving@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20210225234319.336131-7-songliubraving@fb.com
Song Liu [Thu, 25 Feb 2021 23:43:18 +0000 (15:43 -0800)]
bpf: runqslower: Prefer using local vmlinux to generate vmlinux.h
Update the Makefile to prefer using $(O)/vmlinux, $(KBUILD_OUTPUT)/vmlinux
(for selftests) or ../../../vmlinux. These files should have the latest
definitions for vmlinux.h.
Signed-off-by: Song Liu <songliubraving@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20210225234319.336131-6-songliubraving@fb.com
Song Liu [Thu, 25 Feb 2021 23:43:17 +0000 (15:43 -0800)]
selftests/bpf: Test deadlock from recursive bpf_task_storage_[get|delete]
Add a test with recursive bpf_task_storage_[get|delete] calls from fentry
programs on bpf_local_storage_lookup and bpf_local_storage_update. Without
a proper deadlock prevention mechanism, this test would cause a deadlock.
Signed-off-by: Song Liu <songliubraving@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20210225234319.336131-5-songliubraving@fb.com
Song Liu [Thu, 25 Feb 2021 23:43:16 +0000 (15:43 -0800)]
selftests/bpf: Add non-BPF_LSM test for task local storage
Task local storage is now enabled for tracing programs. Add two tests for
task local storage without CONFIG_BPF_LSM.
The first test stores a value in sys_enter and reads it back in sys_exit.
The second test checks whether the kernel allows allocating task local
storage in exit_creds() (which it should not).
Signed-off-by: Song Liu <songliubraving@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20210225234319.336131-4-songliubraving@fb.com
Song Liu [Thu, 25 Feb 2021 23:43:15 +0000 (15:43 -0800)]
bpf: Prevent deadlock from recursive bpf_task_storage_[get|delete]
BPF helpers bpf_task_storage_[get|delete] could hold two locks:
bpf_local_storage_map_bucket->lock and bpf_local_storage->lock. Calling
these helpers from fentry/fexit programs on functions in bpf_*_storage.c
may cause a deadlock on either lock.
Prevent such deadlocks with a per-cpu counter, bpf_task_storage_busy. We
need this counter to be global, because the two locks here belong to two
different objects: bpf_local_storage_map and bpf_local_storage. If we
picked one of them as the owner of the counter, it would still be possible
to trigger a deadlock on the other lock. For example, if
bpf_local_storage_map owned the counter, it could not prevent a deadlock
on bpf_local_storage->lock when two maps are used.
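A minimal sketch of the guard pattern this describes (the exact
function bodies are assumptions based on the description above):
```
#include <linux/percpu.h>
#include <linux/preempt.h>

static DEFINE_PER_CPU(int, bpf_task_storage_busy);

/* Fail over instead of deadlocking when re-entered on the same CPU. */
static bool bpf_task_storage_trylock(void)
{
	migrate_disable();
	if (unlikely(this_cpu_inc_return(bpf_task_storage_busy) != 1)) {
		this_cpu_dec(bpf_task_storage_busy);
		migrate_enable();
		return false;
	}
	return true;
}

static void bpf_task_storage_unlock(void)
{
	this_cpu_dec(bpf_task_storage_busy);
	migrate_enable();
}
```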
Signed-off-by: Song Liu <songliubraving@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Martin KaFai Lau <kafai@fb.com>
Link: https://lore.kernel.org/bpf/20210225234319.336131-3-songliubraving@fb.com
Song Liu [Thu, 25 Feb 2021 23:43:14 +0000 (15:43 -0800)]
bpf: Enable task local storage for tracing programs
To access per-task data, BPF programs usually create a hash table with
the pid as the key. This is not ideal because:
1. The user needs to estimate the proper size of the hash table, which may
be inaccurate;
2. Big hash tables are slow;
3. To clean up the data properly on task termination, the user needs
to write extra logic.
Task local storage overcomes these issues and offers a better option for
such per-task data. Task local storage was previously only available to
BPF_LSM; now enable it for tracing programs.
Unlike LSM programs, tracing programs can be called in IRQ contexts.
Helpers that access task local storage are updated to use
raw_spin_lock_irqsave() instead of raw_spin_lock_bh().
Tracing programs can attach to functions on the task free path, e.g.
exit_creds(). To avoid allocating task local storage after
bpf_task_storage_free(), bpf_task_storage_get() is updated to not allocate
new storage when the task is not refcounted (task->usage == 0).
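As a hedged usage sketch, a tracing program can now keep per-task
state like this (the map and program names are invented for the
example):
```
#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

struct {
	__uint(type, BPF_MAP_TYPE_TASK_STORAGE);
	__uint(map_flags, BPF_F_NO_PREALLOC);
	__type(key, int);
	__type(value, __u64);
} wakeup_ts SEC(".maps");

SEC("tp_btf/sched_wakeup")
int BPF_PROG(handle_wakeup, struct task_struct *p)
{
	__u64 *ts;

	/* create the per-task slot on first access; it is freed
	 * automatically when the task exits
	 */
	ts = bpf_task_storage_get(&wakeup_ts, p, 0,
				  BPF_LOCAL_STORAGE_GET_F_CREATE);
	if (ts)
		*ts = bpf_ktime_get_ns();
	return 0;
}

char LICENSE[] SEC("license") = "GPL";
```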
Signed-off-by: Song Liu <songliubraving@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: KP Singh <kpsingh@kernel.org>
Acked-by: Martin KaFai Lau <kafai@fb.com>
Link: https://lore.kernel.org/bpf/20210225234319.336131-2-songliubraving@fb.com
Xuan Zhuo [Thu, 18 Feb 2021 20:50:45 +0000 (20:50 +0000)]
xsk: Build skb by page (aka generic zerocopy xmit)
This patch constructs the skb based on pages, to save memory copy
overhead.
It is implemented based on IFF_TX_SKB_NO_LINEAR. Only network cards
whose priv_flags include IFF_TX_SKB_NO_LINEAR will use pages to
directly construct the skb. If this feature is not supported, it is
still necessary to copy data to construct the skb.
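A hedged sketch of the dispatch this implies on the generic xmit path
(the helper names are assumptions):
```
/* Build the skb from umem pages when the device accepts skbs with
 * an empty linear area; otherwise fall back to the copy path.
 */
static struct sk_buff *xsk_build_skb(struct xdp_sock *xs,
				     struct xdp_desc *desc)
{
	if (xs->dev->priv_flags & IFF_TX_SKB_NO_LINEAR)
		return xsk_build_skb_zerocopy(xs, desc); /* attach pages */

	return xsk_build_skb_copy(xs, desc); /* memcpy into linear data */
}
```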
---------------- Performance Testing ------------
The test environment is Aliyun ECS server.
Test cmd:
```
xdpsock -i eth0 -t -S -s <msg size>
```
Test result data:
size          64       512      1024      1500
copy     1916747   1775988   1600203   1440054
page     1974058   1953655   1945463   1904478
percent     3.0%     10.0%    21.58%     32.3%
Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
Signed-off-by: Alexander Lobakin <alobakin@pm.me>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Reviewed-by: Dust Li <dust.li@linux.alibaba.com>
Acked-by: Magnus Karlsson <magnus.karlsson@intel.com>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Link: https://lore.kernel.org/bpf/20210218204908.5455-6-alobakin@pm.me
Alexander Lobakin [Thu, 18 Feb 2021 20:50:31 +0000 (20:50 +0000)]
xsk: Respect device's headroom and tailroom on generic xmit path
xsk_generic_xmit() allocates a new skb and then queues it for
xmitting. The size of the new skb is desc->len, so it comes
to the driver/device with no reserved headroom and/or tailroom.
Lots of drivers need some headroom (and sometimes tailroom) to
prepend (and/or append) some headers or data, e.g. CPU tags,
device-specific headers/descriptors (LSO, TLS etc.), and in case
of no available space skb_cow_head() will reallocate the skb.
Reallocations are unwanted on the fast path, especially when it comes
to XDP, so generic XSK xmit should reserve the space declared in
dev->needed_headroom and dev->needed_tailroom to avoid them.
Note on max(NET_SKB_PAD, L1_CACHE_ALIGN(dev->needed_headroom)):
Usually, output functions reserve LL_RESERVED_SPACE(dev), which
consists of dev->hard_header_len + dev->needed_headroom, aligned
to 16 bytes.
However, on the XSK xmit path the hard header is already in the
chunk, so hard_header_len is not needed. But it'd still be better to
align data up to a cacheline, while reserving no less than the driver
requests for headroom. NET_SKB_PAD here is to double-insure there will
be no reallocations even when the driver advertises no
needed_headroom, but in fact needs it (not such a rare case).
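A hedged sketch of the reservation described above, as it might look
on the generic xmit path (error handling trimmed, surrounding context
assumed):
```
u32 hr = max(NET_SKB_PAD, L1_CACHE_ALIGN(dev->needed_headroom));
u32 tr = dev->needed_tailroom;
u32 len = desc->len;
struct sk_buff *skb;
int err;

/* allocate with room for the driver's headers on both ends */
skb = sock_alloc_send_skb(&xs->sk, hr + len + tr, 1, &err);
if (!skb)
	return ERR_PTR(err);

/* leave the packet data itself where the driver expects it */
skb_reserve(skb, hr);
skb_put(skb, len);
```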
Fixes:
35fcde7f8deb ("xsk: support for Tx")
Signed-off-by: Alexander Lobakin <alobakin@pm.me>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Magnus Karlsson <magnus.karlsson@intel.com>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Link: https://lore.kernel.org/bpf/20210218204908.5455-5-alobakin@pm.me
Xuan Zhuo [Thu, 18 Feb 2021 20:50:17 +0000 (20:50 +0000)]
virtio-net: Support IFF_TX_SKB_NO_LINEAR flag
Virtio net supports the case where the skb linear space is empty, so
add priv_flags.
Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
Signed-off-by: Alexander Lobakin <alobakin@pm.me>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Link: https://lore.kernel.org/bpf/20210218204908.5455-4-alobakin@pm.me
Xuan Zhuo [Thu, 18 Feb 2021 20:50:02 +0000 (20:50 +0000)]
net: Add priv_flags for allow tx skb without linear
In some cases, we hope to construct the skb directly based on existing
memory, without copying data. In this case, the page will be placed
directly in the skb, and the linear space of the skb is empty. But
unfortunately, many network cards do not support this operation.
For example, a Mellanox Technologies MT27710 Family [ConnectX-4 Lx] will
get the following error message:
mlx5_core 0000:3b:00.1 eth1: Error cqe on cqn 0x817, ci 0x8,
qn 0x1dbb, opcode 0xd, syndrome 0x1, vendor syndrome 0x68
00000000: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00000010: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00000020: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00000030: 00 00 00 00 60 10 68 01 0a 00 1d bb 00 0f 9f d2
WQE DUMP: WQ size 1024 WQ cur size 0, WQE index 0xf, len: 64
00000000: 00 00 0f 0a 00 1d bb 03 00 00 00 08 00 00 00 00
00000010: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00000020: 00 00 00 2b 00 08 00 00 00 00 00 05 9e e3 08 00
00000030: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
mlx5_core 0000:3b:00.1 eth1: ERR CQE on SQ: 0x1dbb
So a priv_flag is added here to indicate whether the network card
supports this feature.
Suggested-by: Alexander Lobakin <alobakin@pm.me>
Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
Signed-off-by: Alexander Lobakin <alobakin@pm.me>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Link: https://lore.kernel.org/bpf/20210218204908.5455-3-alobakin@pm.me
Alexander Lobakin [Thu, 18 Feb 2021 20:49:41 +0000 (20:49 +0000)]
netdevice: Add missing IFF_PHONY_HEADROOM self-definition
This is harmless for now, but can be fatal for future refactors.
Fixes:
871b642adebe3 ("netdev: introduce ndo_set_rx_headroom")
Signed-off-by: Alexander Lobakin <alobakin@pm.me>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Link: https://lore.kernel.org/bpf/20210218204908.5455-2-alobakin@pm.me
Grant Seltzer [Mon, 22 Feb 2021 19:58:46 +0000 (19:58 +0000)]
bpf: Add kernel/modules BTF presence checks to bpftool feature command
This adds both the CONFIG_DEBUG_INFO_BTF and CONFIG_DEBUG_INFO_BTF_MODULES
kernel compile options to the output of the bpftool feature command.
This is relevant for developers who want to account for data structure
definition differences between kernels.
Signed-off-by: Grant Seltzer <grantseltzer@gmail.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Martin KaFai Lau <kafai@fb.com>
Link: https://lore.kernel.org/bpf/20210222195846.155483-1-grantseltzer@gmail.com
Linus Torvalds [Sun, 21 Feb 2021 20:49:32 +0000 (12:49 -0800)]
Merge tag 'perf-core-2021-02-17' of git://git./linux/kernel/git/tip/tip
Pull performance event updates from Ingo Molnar:
- Add CPU-PMU support for Intel Sapphire Rapids CPUs
- Extend the perf ABI with PERF_SAMPLE_WEIGHT_STRUCT, to offer
two-parameter sampling event feedback. Not used yet, but is intended
for Golden Cove CPU-PMU, which can provide both the instruction
latency and the cache latency information for memory profiling
events.
- Remove experimental, default-disabled perfmon-v4 counter_freezing
support that could only be enabled via a boot option. The hardware is
hopelessly broken, we'd like to make sure nobody starts relying on
this, as it would only end in tears.
- Fix energy/power events on Intel SPR platforms
- Simplify the uprobes resume_execution() logic
- Misc smaller fixes.
* tag 'perf-core-2021-02-17' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
perf/x86/rapl: Fix psys-energy event on Intel SPR platform
perf/x86/rapl: Only check lower 32bits for RAPL energy counters
perf/x86/rapl: Add msr mask support
perf/x86/kvm: Add Cascade Lake Xeon steppings to isolation_ucodes[]
perf/x86/intel: Support CPUID 10.ECX to disable fixed counters
perf/x86/intel: Add perf core PMU support for Sapphire Rapids
perf/x86/intel: Filter unsupported Topdown metrics event
perf/x86/intel: Factor out intel_update_topdown_event()
perf/core: Add PERF_SAMPLE_WEIGHT_STRUCT
perf/intel: Remove Perfmon-v4 counter_freezing support
x86/perf: Use static_call for x86_pmu.guest_get_msrs
perf/x86/intel/uncore: With > 8 nodes, get pci bus die id from NUMA info
perf/x86/intel/uncore: Store the logical die id instead of the physical die id.
x86/kprobes: Do not decode opcode in resume_execution()