Delyan Kratunov [Wed, 16 Mar 2022 23:37:31 +0000 (23:37 +0000)]
selftests/bpf: Test subskeleton functionality
This patch changes the selftests/bpf Makefile to also generate
a subskel.h for every skel.h it would normally generate.
Separately, it also introduces a new subskeleton test which tests
library objects, externs, weak symbols, kconfigs, and user maps.
Signed-off-by: Delyan Kratunov <delyank@fb.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/1bd24956940bbbfe169bb34f7f87b11df52ef011.1647473511.git.delyank@fb.com
Delyan Kratunov [Wed, 16 Mar 2022 23:37:28 +0000 (23:37 +0000)]
bpftool: Add support for subskeletons
Subskeletons are headers which require an already loaded program to
operate.
For example, when a BPF library is linked into a larger BPF object file,
the library userspace needs a way to access its own global variables
without requiring knowledge about the larger program at build time.
As a result, subskeletons require a loaded bpf_object to open().
Further, they find their own symbols in the larger program by
walking BTF type data at run time.
At this time, programs, maps, and globals are supported through
non-owning pointers.
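As a hedged illustration of the intended flow (the 'lib' names and the exact generated accessors are invented here, not taken from the patch; the real identifiers come from the generated header):
```c
/* generated with: bpftool gen subskeleton lib.bpf.o > lib.subskel.h */
#include "lib.subskel.h"

/* obj is the already-loaded larger bpf_object the library was linked into */
struct lib *sub = lib__open(obj);
if (!sub)
	return -errno;
/* sub now exposes the library's progs, maps, and globals, located via
 * BTF at run time, as non-owning pointers into obj */
lib__destroy(sub);
```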
Signed-off-by: Delyan Kratunov <delyank@fb.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/ca8a48b4841c72d285ecce82371bef4a899756cb.1647473511.git.delyank@fb.com
Delyan Kratunov [Wed, 16 Mar 2022 23:37:33 +0000 (23:37 +0000)]
libbpf: Add subskeleton scaffolding
In symmetry with bpf_object__open_skeleton(),
bpf_object__open_subskeleton() performs the actual walking and linking
of maps, progs, and globals described by bpf_*_skeleton objects.
Signed-off-by: Delyan Kratunov <delyank@fb.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/6942a46fbe20e7ebf970affcca307ba616985b15.1647473511.git.delyank@fb.com
Delyan Kratunov [Wed, 16 Mar 2022 23:37:30 +0000 (23:37 +0000)]
libbpf: Init btf_{key,value}_type_id on internal map open
For internal and user maps, look up the key and value btf
types on open() and not load(), so that `bpf_map_btf_value_type_id`
is usable in `bpftool gen`.
Signed-off-by: Delyan Kratunov <delyank@fb.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/78dbe4e457b4a05e098fc6c8f50014b680c86e4e.1647473511.git.delyank@fb.com
Delyan Kratunov [Wed, 16 Mar 2022 23:37:24 +0000 (23:37 +0000)]
libbpf: .text routines are subprograms in strict mode
Currently, libbpf considers a single routine in .text to be a program. This
is particularly confusing when it comes to library objects - a single routine
meant to be used as an extern will instead be considered a bpf_program.
This patch hides this compatibility behavior behind the pre-existing
SEC_NAME strict mode flag.
Signed-off-by: Delyan Kratunov <delyank@fb.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/018de8d0d67c04bf436055270d35d394ba393505.1647473511.git.delyank@fb.com
Alexei Starovoitov [Fri, 18 Mar 2022 00:30:51 +0000 (17:30 -0700)]
Merge branch 'bpf: Add kprobe multi link'
Jiri Olsa says:
====================
hi,
this patchset adds a new link type BPF_TRACE_KPROBE_MULTI that attaches a
kprobe program through the fprobe API [1] introduced by Masami.
The fprobe API makes it possible to attach a probe to multiple functions at
once very quickly, because it works on top of ftrace. On the other hand,
this limits the probe points to function entry or return.
With bpftrace support I see the following attach speed:
# perf stat --null -r 5 ./src/bpftrace -e 'kprobe:x* { } i:ms:1 { exit(); } '
Attaching 2 probes...
Attaching 3342 functions
...
1.4960 +- 0.0285 seconds time elapsed ( +- 1.91% )
v3 changes:
- based on latest fprobe post from Masami [2]
- add acks
- add extra comment to kprobe_multi_link_handler wrt entry ip setup [Masami]
- keep swap_words_64 static and swap values directly in
bpf_kprobe_multi_cookie_swap [Andrii]
- rearrange locking/migrate setup in kprobe_multi_link_prog_run [Andrii]
- move uapi fields [Andrii]
- add bpf_program__attach_kprobe_multi_opts function [Andrii]
- many small test changes [Andrii]
- added tests for bpf_program__attach_kprobe_multi_opts
- make kallsyms_lookup_name check for empty string [Andrii]
v2 changes:
- based on latest fprobe changes [1]
- renaming the uapi interface to kprobe multi
- adding support for sort_r to pass user pointer for swap functions
and using that in cookie support to keep just single functions array
- moving new link to kernel/trace/bpf_trace.c file
- using single fprobe callback function for entry and exit
- using kvzalloc, libbpf_ensure_mem functions
- adding new k[ret]probe.multi sections instead of using current kprobe
- used glob_match from test_progs.c, added '?' matching
- move bpf_get_func_ip verifier inline change to separate change
- couple of other minor fixes
Also available at:
https://git.kernel.org/pub/scm/linux/kernel/git/jolsa/perf.git
bpf/kprobe_multi
thanks,
jirka
[1] https://lore.kernel.org/bpf/164458044634.586276.3261555265565111183.stgit@devnote2/
[2] https://lore.kernel.org/bpf/164735281449.1084943.12438881786173547153.stgit@devnote2/
====================
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Jiri Olsa [Wed, 16 Mar 2022 12:24:19 +0000 (13:24 +0100)]
selftests/bpf: Add cookie test for bpf_program__attach_kprobe_multi_opts
Adding bpf_cookie test for programs attached by
bpf_program__attach_kprobe_multi_opts API.
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20220316122419.933957-14-jolsa@kernel.org
Jiri Olsa [Wed, 16 Mar 2022 12:24:18 +0000 (13:24 +0100)]
selftests/bpf: Add attach test for bpf_program__attach_kprobe_multi_opts
Adding tests for the bpf_program__attach_kprobe_multi_opts function
that test attaching with a pattern, symbols, and addrs.
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20220316122419.933957-13-jolsa@kernel.org
Jiri Olsa [Wed, 16 Mar 2022 12:24:17 +0000 (13:24 +0100)]
selftests/bpf: Add kprobe_multi bpf_cookie test
Adding bpf_cookie test for programs attached by kprobe_multi links.
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20220316122419.933957-12-jolsa@kernel.org
Jiri Olsa [Wed, 16 Mar 2022 12:24:16 +0000 (13:24 +0100)]
selftests/bpf: Add kprobe_multi attach test
Adding a kprobe_multi attach test that uses the new fprobe interface to
attach a kprobe program to multiple functions.
The test attaches programs to the bpf_fentry_test* functions and
uses a single trampoline program, run via bpf_prog_test_run, to trigger
the bpf_fentry_test* functions.
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20220316122419.933957-11-jolsa@kernel.org
Jiri Olsa [Wed, 16 Mar 2022 12:24:15 +0000 (13:24 +0100)]
libbpf: Add bpf_program__attach_kprobe_multi_opts function
Adding the bpf_program__attach_kprobe_multi_opts function for attaching
a kprobe program to multiple functions.
struct bpf_link *
bpf_program__attach_kprobe_multi_opts(const struct bpf_program *prog,
                                      const char *pattern,
                                      const struct bpf_kprobe_multi_opts *opts);
The user can specify functions to attach with the 'pattern' argument, which
allows wildcards ('*' and '?' supported), or provide symbols or addresses
directly through the opts argument. These 3 options are mutually
exclusive.
When using symbols or addresses, the user can also provide a cookie value
for each symbol/address that can be retrieved later in the bpf program
with the bpf_get_attach_cookie helper.
struct bpf_kprobe_multi_opts {
	size_t sz;
	const char **syms;
	const unsigned long *addrs;
	const __u64 *cookies;
	size_t cnt;
	bool retprobe;
	size_t :0;
};
Symbols, addresses and cookies are provided through the opts object
(syms/addrs/cookies) as array pointers with a specified count (cnt).
Each cookie value is paired with the function address or symbol
at the same array index.
The program can also be attached as a return probe if 'retprobe' is set.
For quick usage with a NULL opts argument, like:
bpf_program__attach_kprobe_multi_opts(prog, "ksys_*", NULL)
the 'prog' will be attached as a kprobe to the 'ksys_*' functions.
Also adding new program sections for automatic attachment:
kprobe.multi/<symbol_pattern>
kretprobe.multi/<symbol_pattern>
The symbol_pattern is used as the 'pattern' argument in the
bpf_program__attach_kprobe_multi_opts function.
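A hedged userspace usage sketch (the symbol list, cookie values and error handling are illustrative, not taken from the patch):
```c
#include <errno.h>
#include <bpf/libbpf.h>

static struct bpf_link *attach_with_cookies(struct bpf_program *prog)
{
	const char *syms[] = { "ksys_read", "ksys_write" }; /* hypothetical targets */
	__u64 cookies[] = { 1, 2 };                         /* one cookie per symbol */
	struct bpf_link *link;

	LIBBPF_OPTS(bpf_kprobe_multi_opts, opts,
		.syms = syms,
		.cookies = cookies,
		.cnt = 2,
	);

	/* pattern is NULL: pattern and syms/addrs are mutually exclusive */
	link = bpf_program__attach_kprobe_multi_opts(prog, NULL, &opts);
	return link;	/* NULL on error, errno set */
}
```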
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20220316122419.933957-10-jolsa@kernel.org
Jiri Olsa [Wed, 16 Mar 2022 12:24:14 +0000 (13:24 +0100)]
libbpf: Add bpf_link_create support for multi kprobes
Adding a new kprobe_multi struct to the bpf_link_create_opts object
to pass multi-kprobe data to the link_create attr uapi.
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20220316122419.933957-9-jolsa@kernel.org
Jiri Olsa [Wed, 16 Mar 2022 12:24:13 +0000 (13:24 +0100)]
libbpf: Add libbpf_kallsyms_parse function
Move the kallsyms parsing into an internal libbpf_kallsyms_parse
function, so it can be used from other places.
It will be used in the following changes.
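For reference, a hedged sketch of the internal interface (the exact prototype is an assumption, not quoted from the patch):
```c
/* internal, from tools/lib/bpf/libbpf_internal.h (sketched) */
typedef int (*kallsyms_cb_t)(unsigned long long sym_addr, char sym_type,
			     const char *sym_name, void *ctx);

int libbpf_kallsyms_parse(kallsyms_cb_t cb, void *arg);
```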
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20220316122419.933957-8-jolsa@kernel.org
Jiri Olsa [Wed, 16 Mar 2022 12:24:12 +0000 (13:24 +0100)]
bpf: Add cookie support to programs attached with kprobe multi link
Adding support to call the bpf_get_attach_cookie helper from
kprobe programs attached with a kprobe multi link.
The cookies are provided as an array of u64 values, where each
value is paired with the function address or symbol
at the same array index.
When a cookie array is provided, it is sorted together with the
addresses (see bpf_kprobe_multi_cookie_swap). This way
we can find the cookie based on the address in the
bpf_get_attach_cookie helper.
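A hedged BPF-side sketch of reading the cookie (the section pattern and program name are illustrative):
```c
#include "vmlinux.h"
#include <bpf/bpf_helpers.h>

char LICENSE[] SEC("license") = "GPL";

SEC("kprobe.multi/ksys_*")
int test_cookie(struct pt_regs *ctx)
{
	/* returns the cookie paired with the symbol/address this hit matched */
	__u64 cookie = bpf_get_attach_cookie(ctx);

	bpf_printk("cookie: %llu", cookie);
	return 0;
}
```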
Suggested-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20220316122419.933957-7-jolsa@kernel.org
Jiri Olsa [Wed, 16 Mar 2022 12:24:11 +0000 (13:24 +0100)]
bpf: Add support to inline bpf_get_func_ip helper on x86
Adding support to inline it on x86, because it is a single
load instruction.
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20220316122419.933957-6-jolsa@kernel.org
Jiri Olsa [Wed, 16 Mar 2022 12:24:10 +0000 (13:24 +0100)]
bpf: Add bpf_get_func_ip kprobe helper for multi kprobe link
Adding support to call the bpf_get_func_ip helper from kprobe
programs attached by a multi kprobe link.
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20220316122419.933957-5-jolsa@kernel.org
Alexei Starovoitov [Fri, 18 Mar 2022 00:23:13 +0000 (17:23 -0700)]
Merge branch 'fprobe: Introduce fprobe function entry/exit probe'
Masami Hiramatsu says:
====================
Hi,
Here is the 12th version of fprobe. This version fixes a possible gcc-11 issue,
which was reported as a kretprobes-on-arm issue, and also updates the fprobe document.
The previous version (v11) is here[1]:
[1] https://lore.kernel.org/all/164701432038.268462.3329725152949938527.stgit@devnote2/T/#u
This series introduces the fprobe, the function entry/exit probe
with multiple probe point support for x86, arm64 and powerpc64le.
This also introduces the rethook for hooking function returns in the same
way the kretprobe does. This abstraction will help us generalize the fgraph
tracer, because we can just switch to it from the rethook in fprobe,
depending on the kernel configuration.
The patch [1/12] is from Jiri's series[2].
[2] https://lore.kernel.org/all/20220104080943.113249-1-jolsa@kernel.org/T/#u
And the patch [9/10] adds the FPROBE_FL_KPROBE_SHARED flag for the case
where the user wants to share the same code (or the same resource) between
the fprobe and kprobes.
I forcibly updated my kprobes/fprobe branch, you can pull this series
from:
https://git.kernel.org/pub/scm/linux/kernel/git/mhiramat/linux.git kprobes/fprobe
Thank you,
---
Jiri Olsa (1):
ftrace: Add ftrace_set_filter_ips function
====================
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Jiri Olsa [Wed, 16 Mar 2022 12:24:09 +0000 (13:24 +0100)]
bpf: Add multi kprobe link
Adding a new link type, BPF_LINK_TYPE_KPROBE_MULTI, that attaches a kprobe
program through the fprobe API.
The fprobe API makes it possible to attach a probe to multiple functions at
once very quickly, because it works on top of ftrace. On the other hand,
this limits the probe points to function entry or return.
The kprobe program gets the same pt_regs input ctx as when it's attached
through the perf API.
Adding a new attach type, BPF_TRACE_KPROBE_MULTI, that allows attaching a
kprobe to multiple functions with the new link.
The user provides an array of addresses or symbols, with a count, to attach
the kprobe program to. The new link_create uapi interface looks like:
struct {
	__u32 flags;
	__u32 cnt;
	__aligned_u64 syms;
	__aligned_u64 addrs;
} kprobe_multi;
The flags field allows a single BPF_F_KPROBE_MULTI_RETURN bit to create a
return multi kprobe.
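A hedged sketch of driving the new uapi directly (the ptr_to_u64() helper and the symbol list are made up here; libbpf wraps all of this in the following patches):
```c
#include <linux/bpf.h>
#include <string.h>
#include <sys/syscall.h>
#include <unistd.h>

static __u64 ptr_to_u64(const void *p)
{
	return (__u64)(unsigned long)p;
}

static int kprobe_multi_attach(int prog_fd)
{
	const char *syms[] = { "ksys_read", "ksys_write" };
	union bpf_attr attr;

	memset(&attr, 0, sizeof(attr));
	attr.link_create.prog_fd = prog_fd;
	attr.link_create.attach_type = BPF_TRACE_KPROBE_MULTI;
	attr.link_create.kprobe_multi.syms = ptr_to_u64(syms);
	attr.link_create.kprobe_multi.cnt = 2;
	attr.link_create.kprobe_multi.flags = 0;	/* entry, not return */

	return syscall(__NR_bpf, BPF_LINK_CREATE, &attr, sizeof(attr));
}
```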
Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20220316122419.933957-4-jolsa@kernel.org
Jiri Olsa [Wed, 16 Mar 2022 12:24:08 +0000 (13:24 +0100)]
kallsyms: Skip the name search for empty string
When kallsyms_lookup_name is called with an empty string,
it does a futile search through all the symbols.
Skip the search for an empty string.
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20220316122419.933957-3-jolsa@kernel.org
Jiri Olsa [Wed, 16 Mar 2022 12:24:07 +0000 (13:24 +0100)]
lib/sort: Add priv pointer to swap function
Adding support for a priv pointer in the swap callback function,
following the initial change for cmp callback functions [1],
and adding a SWAP_WRAPPER macro to identify a sort call coming from sort_r.
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Reviewed-by: Masami Hiramatsu <mhiramat@kernel.org>
Link: https://lore.kernel.org/bpf/20220316122419.933957-2-jolsa@kernel.org
[1] 4333fb96ca10 ("media: lib/sort.c: implement sort() variant taking context argument")
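A hedged sketch of what the priv pointer enables, mirroring the cookie use case above (all names here are illustrative, and the callback prototypes are assumed from the description):
```c
#include <linux/sort.h>
#include <linux/types.h>

struct multi_data {
	unsigned long *addrs;
	u64 *cookies;
};

static int addrs_cmp(const void *a, const void *b, const void *priv)
{
	const unsigned long *x = a, *y = b;

	return *x < *y ? -1 : *x > *y;
}

static void addrs_swap(void *a, void *b, int size, const void *priv)
{
	const struct multi_data *md = priv;
	size_t i = (unsigned long *)a - md->addrs;
	size_t j = (unsigned long *)b - md->addrs;
	unsigned long tmp_addr = md->addrs[i];
	u64 tmp_cookie = md->cookies[i];

	/* swap both parallel arrays so each cookie stays paired with its addr */
	md->addrs[i] = md->addrs[j];
	md->cookies[i] = md->cookies[j];
	md->addrs[j] = tmp_addr;
	md->cookies[j] = tmp_cookie;
}

/* usage: sort_r(md.addrs, cnt, sizeof(*md.addrs), addrs_cmp, addrs_swap, &md); */
```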
Masami Hiramatsu [Tue, 15 Mar 2022 14:02:35 +0000 (23:02 +0900)]
fprobe: Add a selftest for fprobe
Add a KUnit based selftest for fprobe interface.
Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
Tested-by: Steven Rostedt (Google) <rostedt@goodmis.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/164735295554.1084943.18347620679928750960.stgit@devnote2
Masami Hiramatsu [Tue, 15 Mar 2022 14:02:22 +0000 (23:02 +0900)]
docs: fprobe: Add fprobe description to ftrace-use.rst
Add documentation of fprobe for users who need
this interface.
Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
Tested-by: Steven Rostedt (Google) <rostedt@goodmis.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/164735294272.1084943.12372175959382037397.stgit@devnote2
Masami Hiramatsu [Tue, 15 Mar 2022 14:02:11 +0000 (23:02 +0900)]
fprobe: Introduce FPROBE_FL_KPROBE_SHARED flag for fprobe
Introduce FPROBE_FL_KPROBE_SHARED flag for sharing fprobe callback with
kprobes safely from the viewpoint of recursion.
Since the recursion safety of the fprobe (and ftrace) is a bit different
from that of kprobes, this may cause an issue if the user wants to run the
same code from the fprobe and the kprobes.
The kprobes has a per-cpu 'current_kprobe' variable which protects the
kprobe handler from recursion in any case. On the other hand, the fprobe
uses only ftrace_test_recursion_trylock(), which allows an interrupt
context to call another (or the same) fprobe while a fprobe user handler is
running.
This is not a problem if the common callback shared among the
kprobes and the fprobe has its own recursion detection, or if it can handle
recursion in the different contexts (normal/interrupt/NMI).
But if it relies on the 'current_kprobe' recursion lock, it has to check
kprobe_running() and use the kprobe_busy_*() APIs.
Fprobe has the FPROBE_FL_KPROBE_SHARED flag to do this. If your common
callback code will be shared with kprobes, please set
FPROBE_FL_KPROBE_SHARED *before* registering the fprobe, like:
fprobe.flags = FPROBE_FL_KPROBE_SHARED;
register_fprobe(&fprobe, "func*", NULL);
This will protect your common callback from nested calls.
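A hedged sketch of a callback shared between kprobes and an fprobe (the entry_handler prototype is assumed from context, not quoted from the patch):
```c
#include <linux/fprobe.h>
#include <linux/kprobes.h>

static void shared_entry_handler(struct fprobe *fp, unsigned long ip,
				 struct pt_regs *regs)
{
	/* bail out if a kprobe is already running on this CPU */
	if (kprobe_running())
		return;

	kprobe_busy_begin();
	/* ... common code also reachable from a kprobe pre_handler ... */
	kprobe_busy_end();
}

static struct fprobe shared_fprobe = {
	.entry_handler	= shared_entry_handler,
	.flags		= FPROBE_FL_KPROBE_SHARED,	/* set before register */
};

/* register_fprobe(&shared_fprobe, "func*", NULL); */
```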
Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
Tested-by: Steven Rostedt (Google) <rostedt@goodmis.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/164735293127.1084943.15687374237275817599.stgit@devnote2
Masami Hiramatsu [Tue, 15 Mar 2022 14:02:00 +0000 (23:02 +0900)]
fprobe: Add sample program for fprobe
Add a sample program for the fprobe. The sample_fprobe puts a fprobe on
kernel_clone() by default. It dumps the stack and some call address info
at the function entry and exit.
The sample_fprobe.ko takes 2 parameters.
- symbol: you can specify comma-separated symbols or a wildcard symbol
pattern (in which case you cannot use a comma)
- stackdump: a bool value to enable or disable stack dumps in the fprobe
handler.
Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
Tested-by: Steven Rostedt (Google) <rostedt@goodmis.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/164735291987.1084943.4449670993752806840.stgit@devnote2
Masami Hiramatsu [Tue, 15 Mar 2022 14:01:48 +0000 (23:01 +0900)]
fprobe: Add exit_handler support
Add exit_handler to fprobe. fprobe + rethook allows us to hook the kernel
function return. The rethook will be enabled only if the
fprobe::exit_handler is set.
Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
Tested-by: Steven Rostedt (Google) <rostedt@goodmis.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/164735290790.1084943.10601965782208052202.stgit@devnote2
Masami Hiramatsu [Tue, 15 Mar 2022 14:01:36 +0000 (23:01 +0900)]
ARM: rethook: Add rethook arm implementation
Add rethook arm implementation. Most of the code has been copied from
kretprobes on arm.
Since arm's ftrace implementation is a bit special, using this from
fprobe needs special care.
Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
Tested-by: Steven Rostedt (Google) <rostedt@goodmis.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/164735289643.1084943.15184590256680485720.stgit@devnote2
Masami Hiramatsu [Tue, 15 Mar 2022 14:01:25 +0000 (23:01 +0900)]
powerpc: Add rethook support
Add rethook powerpc64 implementation. Most of the code has been copied from
kretprobes on powerpc64.
Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
Tested-by: Steven Rostedt (Google) <rostedt@goodmis.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/164735288495.1084943.539630613772422267.stgit@devnote2
Masami Hiramatsu [Tue, 15 Mar 2022 14:01:13 +0000 (23:01 +0900)]
arm64: rethook: Add arm64 rethook implementation
Add rethook arm64 implementation. Most of the code has been copied from
kretprobes on arm64.
Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
Tested-by: Steven Rostedt (Google) <rostedt@goodmis.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/164735287344.1084943.9787335632585653418.stgit@devnote2
Masami Hiramatsu [Tue, 15 Mar 2022 14:01:02 +0000 (23:01 +0900)]
rethook: x86: Add rethook x86 implementation
Add rethook for x86 implementation. Most of the code has been copied from
kretprobes on x86.
Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
Tested-by: Steven Rostedt (Google) <rostedt@goodmis.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/164735286243.1084943.7477055110527046644.stgit@devnote2
Masami Hiramatsu [Tue, 15 Mar 2022 14:00:50 +0000 (23:00 +0900)]
rethook: Add a generic return hook
Add a return hook framework which hooks the function return. Most of the
logic came from the kretprobe, but this is independent from kretprobe.
Note that this is expected to be used with other function entry hooking
features, like ftrace, fprobe, and kprobes. Eventually this will replace
the kretprobe (e.g. kprobe + rethook = kretprobe), but at this moment,
this is just an additional hook.
Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
Tested-by: Steven Rostedt (Google) <rostedt@goodmis.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/164735285066.1084943.9259661137330166643.stgit@devnote2
Masami Hiramatsu [Tue, 15 Mar 2022 14:00:38 +0000 (23:00 +0900)]
fprobe: Add ftrace based probe APIs
The fprobe is a wrapper API for the ftrace function tracer.
Unlike kprobes, this probe only supports function entry, but it
can probe multiple functions with one fprobe. The usage is similar: the
user sets their callback to fprobe::entry_handler and calls
register_fprobe*() with the probed functions.
There are 3 registration interfaces:
- register_fprobe() takes filtering patterns of the function names.
- register_fprobe_ips() takes an array of ftrace-location addresses.
- register_fprobe_syms() takes an array of function names.
The registered fprobes can be unregistered with unregister_fprobe().
e.g.
struct fprobe fp = { .entry_handler = user_handler };
const char *targets[] = { "func1", "func2", "func3"};
...
ret = register_fprobe_syms(&fp, targets, ARRAY_SIZE(targets));
...
unregister_fprobe(&fp);
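For completeness, a hedged sketch of what user_handler above might look like (the exact entry_handler prototype is an assumption):
```c
#include <linux/fprobe.h>
#include <linux/printk.h>

static void user_handler(struct fprobe *fp, unsigned long ip,
			 struct pt_regs *regs)
{
	/* called on entry of every probed function; ip is the probed address */
	pr_info("fprobe hit %pS\n", (void *)ip);
}
```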
Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
Tested-by: Steven Rostedt (Google) <rostedt@goodmis.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/164735283857.1084943.1154436951479395551.stgit@devnote2
Jiri Olsa [Tue, 15 Mar 2022 14:00:26 +0000 (23:00 +0900)]
ftrace: Add ftrace_set_filter_ips function
Adding the ftrace_set_filter_ips function to be able to set a filter on
multiple ip addresses at once.
With the kprobe multi attach interface we have cases where we need to
initialize the ftrace_ops object with thousands of functions, so having
a single function dive into ftrace_hash_move_and_update_ops under
ftrace_lock is faster.
The function ips are passed as an unsigned long array with a count.
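A hedged usage sketch (the callback and setup function are illustrative, and the remove/reset argument order is assumed from the description):
```c
#include <linux/ftrace.h>

static void my_func(unsigned long ip, unsigned long parent_ip,
		    struct ftrace_ops *ops, struct ftrace_regs *fregs)
{
	/* runs for every filtered ip */
}

static struct ftrace_ops my_ops = {
	.func = my_func,
};

static int filter_thousands(unsigned long *ips, unsigned int cnt)
{
	/* one dive into ftrace_hash_move_and_update_ops for all ips */
	return ftrace_set_filter_ips(&my_ops, ips, cnt,
				     0 /* remove */, 1 /* reset */);
}
```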
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
Tested-by: Steven Rostedt (Google) <rostedt@goodmis.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/164735282673.1084943.18310504594134769804.stgit@devnote2
Kaixi Fan [Sun, 13 Mar 2022 16:41:16 +0000 (00:41 +0800)]
selftests/bpf: Fix tunnel remote IP comments
In namespace at_ns0, the IP address of the tnl dev is 10.1.1.100, which is
the overlay IP, and the IP address of veth0 is 172.16.1.100, which is the
vtep IP. When doing 'ping 10.1.1.100' from the root namespace, the
remote_ip should be 172.16.1.100.
Fixes: 933a741e3b82 ("selftests/bpf: bpf tunnel test.")
Signed-off-by: Kaixi Fan <fankaixi.li@bytedance.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Link: https://lore.kernel.org/bpf/20220313164116.5889-1-fankaixi.li@bytedance.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Lorenzo Bianconi [Fri, 11 Mar 2022 09:14:20 +0000 (10:14 +0100)]
veth: Allow jumbo frames in xdp mode
Allow increasing the MTU over page boundaries on veth devices
if the attached xdp program declares support for xdp fragments.
Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Toke Høiland-Jørgensen <toke@redhat.com>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Link: https://lore.kernel.org/bpf/d5dc039c3d4123426e7023a488c449181a7bc57f.1646989407.git.lorenzo@kernel.org
Lorenzo Bianconi [Fri, 11 Mar 2022 09:14:19 +0000 (10:14 +0100)]
veth: Rework veth_xdp_rcv_skb in order to accept non-linear skb
Introduce the veth_convert_skb_to_xdp_buff routine in order to
convert a non-linear skb into an xdp buffer. If the received skb
is cloned or shared, veth_convert_skb_to_xdp_buff will copy it
into a new skb composed of order-0 pages for the linear and the
fragmented areas. Moreover, veth_convert_skb_to_xdp_buff guarantees
we have enough headroom for xdp.
This is a preliminary patch to allow attaching xdp programs with frags
support on veth devices.
Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Toke Høiland-Jørgensen <toke@redhat.com>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Link: https://lore.kernel.org/bpf/8d228b106bc1903571afd1d77e797bffe9a5ea7c.1646989407.git.lorenzo@kernel.org
Lorenzo Bianconi [Fri, 11 Mar 2022 09:14:18 +0000 (10:14 +0100)]
net: veth: Account total xdp_frame len running ndo_xdp_xmit
Even if this is a theoretical issue, since it is not possible to perform
XDP_REDIRECT on a non-linear xdp_frame, the veth driver does not account
for the paged area in its ndo_xdp_xmit function pointer.
Introduce the xdp_get_frame_len utility routine to get the full xdp_frame
length and account for the total frame size when running XDP_REDIRECT of a
non-linear xdp frame into a veth device.
Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Toke Hoiland-Jorgensen <toke@redhat.com>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Link: https://lore.kernel.org/bpf/54f9fd3bb65d190daf2c0bbae2f852ff16cfbaa0.1646989407.git.lorenzo@kernel.org
Hou Tao [Wed, 9 Mar 2022 12:33:21 +0000 (20:33 +0800)]
selftests/bpf: Test subprog jit when toggle bpf_jit_harden repeatedly
When bpf_jit_harden is toggled between 0 and 2, subprog jit may fail
because bpf_jit_harden is read twice during jit and the two values may be
inconsistent. So add a test to ensure the problem is fixed.
Signed-off-by: Hou Tao <houtao1@huawei.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20220309123321.2400262-5-houtao1@huawei.com
Hou Tao [Wed, 9 Mar 2022 12:33:20 +0000 (20:33 +0800)]
bpf: Fix net.core.bpf_jit_harden race
It is the bpf_jit_harden counterpart to commit 60b58afc96c9 ("bpf: fix
net.core.bpf_jit_enable race"). bpf_jit_harden will be tested twice
for each subprog if there are subprogs in the bpf program, and constant
blinding may increase the length of the program, so when running
"./test_progs -t subprogs" and toggling bpf_jit_harden between 0 and 2,
jit_subprogs may fail because constant blinding increases the length
of the subprog instructions during the extra pass.
So cache the value of bpf_jit_blinding_enabled() during program
allocation, and use the cached value during constant blinding, subprog
JITing and args tracking of tail call.
Signed-off-by: Hou Tao <houtao1@huawei.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20220309123321.2400262-4-houtao1@huawei.com
Hou Tao [Wed, 9 Mar 2022 12:33:18 +0000 (20:33 +0800)]
bpf, x86: Fall back to interpreter mode when extra pass fails
The extra pass for subprog jit may fail (e.g. due to the bpf_jit_harden
race), but bpf_func is not cleared for the subprog and jit_subprogs will
succeed. Running the bpf program may then lead to an oops because the
memory for the jited subprog image has already been freed.
So fall back to interpreter mode by clearing bpf_func/jited/jited_len
when the extra pass fails.
Signed-off-by: Hou Tao <houtao1@huawei.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20220309123321.2400262-2-houtao1@huawei.com
Alexei Starovoitov [Wed, 16 Mar 2022 22:07:49 +0000 (15:07 -0700)]
Merge branch 'Remove libcap dependency from bpf selftests'
Martin KaFai Lau says:
====================
After upgrading to the newer libcap (>= 2.60), the libcap commit
aca076443591 ("Make cap_t operations thread safe.") added a
"__u8 mutex;" to the "struct _cap_struct". It caused a few-byte
shift that breaks the assumption made in the "struct libcap" definition
in test_verifier.c.
This set is to remove the libcap dependency from the bpf selftests.
v2:
- Define CAP_PERFMON and CAP_BPF when the older <linux/capability.h>
does not have them. (Andrii)
====================
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Martin KaFai Lau [Wed, 16 Mar 2022 17:38:35 +0000 (10:38 -0700)]
bpf: selftests: Remove libcap usage from test_progs
This patch removes the libcap usage from test_progs.
bind_perm.c is the only user. cap_*_effective() helpers added in the
earlier patch are directly used instead.
No other selftest binary is using libcap, so '-lcap' is also removed
from the Makefile.
Signed-off-by: Martin KaFai Lau <kafai@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Reviewed-by: Stanislav Fomichev <sdf@google.com>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Link: https://lore.kernel.org/bpf/20220316173835.2039334-1-kafai@fb.com
Martin KaFai Lau [Wed, 16 Mar 2022 17:38:29 +0000 (10:38 -0700)]
bpf: selftests: Remove libcap usage from test_verifier
This patch removes the libcap usage from test_verifier.
The cap_*_effective() helpers added in the earlier patch are
used instead.
Signed-off-by: Martin KaFai Lau <kafai@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Link: https://lore.kernel.org/bpf/20220316173829.2038682-1-kafai@fb.com
Martin KaFai Lau [Wed, 16 Mar 2022 17:38:23 +0000 (10:38 -0700)]
bpf: selftests: Add helpers to directly use the capget and capset syscall
After upgrading to the newer libcap (>= 2.60), the libcap commit
aca076443591 ("Make cap_t operations thread safe.") added a
"__u8 mutex;" to the "struct _cap_struct". It caused a few-byte
shift that breaks the assumption made in the "struct libcap" definition
in test_verifier.c.
The bpf selftest usage only needs to enable and disable the effective
caps of the running task. It is easier to use the capget and capset
syscalls directly instead. This also removes the libcap
library dependency.
The cap_helpers.{c,h} files are added. One __u64 is used for all CAP_*
bits instead of two __u32 values.
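A hedged sketch of the direct-syscall approach (the helper name is invented; the real helpers live in cap_helpers.{c,h}):
```c
#include <linux/capability.h>
#include <sys/syscall.h>
#include <unistd.h>

/* drop one capability from the effective set of the current task */
static int drop_effective_cap(int cap)
{
	struct __user_cap_header_struct hdr = {
		.version = _LINUX_CAPABILITY_VERSION_3,	/* two __u32 data slots */
	};
	struct __user_cap_data_struct data[2] = {};

	if (syscall(SYS_capget, &hdr, data))
		return -1;
	data[cap / 32].effective &= ~(1U << (cap % 32));
	return syscall(SYS_capset, &hdr, data);
}
```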
Signed-off-by: Martin KaFai Lau <kafai@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Link: https://lore.kernel.org/bpf/20220316173823.2036955-1-kafai@fb.com
Daniel Xu [Sun, 13 Mar 2022 23:19:46 +0000 (16:19 -0700)]
bpftool: man: Add missing top level docs
The top-level (bpftool.8) man page was missing docs for a few
subcommands and their respective sub-sub-commands.
This commit brings the top level man page up to date. Note that I've
kept the ordering of the subcommands the same as in `bpftool help`.
Signed-off-by: Daniel Xu <dxu@dxuuu.xyz>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/3049ef5dc509c0d1832f0a8b2dba2ccaad0af688.1647213551.git.dxu@dxuuu.xyz
Dmitrii Dolgov [Wed, 9 Mar 2022 16:31:12 +0000 (17:31 +0100)]
bpftool: Add bpf_cookie to link output
Commit 82e6b1eee6a8 ("bpf: Allow to specify user-provided bpf_cookie for
BPF perf links") introduced the concept of user specified bpf_cookie,
which could be accessed by BPF programs using bpf_get_attach_cookie().
For troubleshooting purposes it is convenient to expose bpf_cookie via
bpftool as well, so there is no need to meddle with the target BPF
program itself.
Implemented using the pid iterator BPF program to actually fetch
bpf_cookies, which allows constraining code changes only to bpftool.
$ bpftool link
1: type 7 prog 5
        bpf_cookie 123
        pids bootstrap(81)
Signed-off-by: Dmitrii Dolgov <9erthalion6@gmail.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: Yonghong Song <yhs@fb.com>
Acked-by: Quentin Monnet <quentin@isovalent.com>
Link: https://lore.kernel.org/bpf/20220309163112.24141-1-9erthalion6@gmail.com
Guo Zhengkui [Tue, 15 Mar 2022 13:01:26 +0000 (21:01 +0800)]
selftests/bpf: Clean up array_size.cocci warnings
Clean up the array_size.cocci warnings under tools/testing/selftests/bpf/:
Use `ARRAY_SIZE(arr)` instead of forms like `sizeof(arr)/sizeof(arr[0])`.
tools/testing/selftests/bpf/test_cgroup_storage.c uses ARRAY_SIZE() defined
in tools/include/linux/kernel.h (sys/sysinfo.h -> linux/kernel.h), while
others use ARRAY_SIZE() in bpf_util.h.
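The transformation itself is mechanical; a hedged sketch (the array and loop are made up):
```c
#include <unistd.h>
#include "bpf_util.h"	/* provides ARRAY_SIZE() */

static int fds[8];

static void close_all(void)
{
	/* before: for (i = 0; i < sizeof(fds) / sizeof(fds[0]); i++) */
	for (unsigned int i = 0; i < ARRAY_SIZE(fds); i++)
		close(fds[i]);
}
```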
Signed-off-by: Guo Zhengkui <guozhengkui@vivo.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20220315130143.2403-1-guozhengkui@vivo.com
Niklas Söderlund [Tue, 15 Mar 2022 10:29:48 +0000 (11:29 +0100)]
samples/bpf, xdpsock: Fix race when running for fix duration of time
When running xdpsock for a fixed duration of time before terminating
using --duration=<n>, there is a race condition that may cause xdpsock
to terminate immediately.
When running for a fixed duration of time, the check to determine when to
terminate execution is in is_benchmark_done() and is executed in
the context of the poller thread:
if (opt_duration > 0) {
	unsigned long dt = (get_nsecs() - start_time);
	if (dt >= opt_duration)
		benchmark_done = true;
}
However, start_time is only set after the poller thread has been
created. This leaves a small window, while the poller thread is starting
and calls is_benchmark_done() for the first time, in which start_time is
not yet set. In that case start_time has its initial value of 0, the
duration check misfires because it does not correlate correctly with the
application's start time, and benchmark_done is set immediately, which in
turn terminates the xdpsock application.
Fix this by setting start_time before creating the poller thread.
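A hedged sketch of the reordering (variable and helper names follow the xdpsock sources as far as recalled, so treat them as assumptions):
```c
/* before the fix, pthread_create() ran first, so poller() could call
 * is_benchmark_done() while start_time was still 0 */
start_time = get_nsecs();

ret = pthread_create(&pt, NULL, poller, NULL);
if (ret)
	exit_with_error(ret);
```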
Fixes: d3f11b018f6c ("samples/bpf: xdpsock: Add duration option to specify how long to run")
Signed-off-by: Niklas Söderlund <niklas.soderlund@corigine.com>
Signed-off-by: Simon Horman <simon.horman@corigine.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20220315102948.466436-1-niklas.soderlund@corigine.com
Wang Yufen [Fri, 4 Mar 2022 08:11:45 +0000 (16:11 +0800)]
bpf, sockmap: Fix double uncharge the mem of sk_msg
If tcp_bpf_sendmsg is running during a tear down operation, psock may be
freed.
tcp_bpf_sendmsg()
tcp_bpf_send_verdict()
sk_msg_return()
tcp_bpf_sendmsg_redir()
unlikely(!psock))
sk_msg_free()
The mem of the msg has been uncharged in tcp_bpf_send_verdict() by
sk_msg_return(), and would be uncharged by sk_msg_free() again. When psock
is null, we can simply return an error code; this would then trigger
sk_msg_free_nocharge in the error path of __SK_REDIRECT and would have
the side effect of throwing an error up to user space. This would be a
slight change in behavior from the user side, but would look the same as an
error if the redirect on the socket threw an error.
This issue can cause the following info:
WARNING: CPU: 0 PID: 2136 at net/ipv4/af_inet.c:155 inet_sock_destruct+0x13c/0x260
Call Trace:
<TASK>
__sk_destruct+0x24/0x1f0
sk_psock_destroy+0x19b/0x1c0
process_one_work+0x1b3/0x3c0
worker_thread+0x30/0x350
? process_one_work+0x3c0/0x3c0
kthread+0xe6/0x110
? kthread_complete_and_exit+0x20/0x20
ret_from_fork+0x22/0x30
</TASK>
Fixes: 604326b41a6f ("bpf, sockmap: convert to generic sk_msg interface")
Signed-off-by: Wang Yufen <wangyufen@huawei.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Link: https://lore.kernel.org/bpf/20220304081145.2037182-5-wangyufen@huawei.com
Wang Yufen [Fri, 4 Mar 2022 08:11:44 +0000 (16:11 +0800)]
bpf, sockmap: Fix more uncharged while msg has more_data
In tcp_bpf_send_verdict(), if msg has more data after
tcp_bpf_sendmsg_redir():
tcp_bpf_send_verdict()
  tosend = msg->sg.size  //msg->sg.size = 22220
  case __SK_REDIRECT:
    sk_msg_return()  //uncharged msg->sg.size (22220) to sk->sk_forward_alloc
    tcp_bpf_sendmsg_redir()  //after tcp_bpf_sendmsg_redir, msg->sg.size = 11000
  goto more_data;
  tosend = msg->sg.size  //msg->sg.size = 11000
  case __SK_REDIRECT:
    sk_msg_return()  //uncharged msg->sg.size (11000) to sk->sk_forward_alloc
The msg->sg.size (11000) has been uncharged twice. To fix, we can charge
back the remaining msg->sg.size before the goto more_data.
This issue can cause the following info:
WARNING: CPU: 0 PID: 9860 at net/core/stream.c:208 sk_stream_kill_queues+0xd4/0x1a0
Call Trace:
<TASK>
inet_csk_destroy_sock+0x55/0x110
__tcp_close+0x279/0x470
tcp_close+0x1f/0x60
inet_release+0x3f/0x80
__sock_release+0x3d/0xb0
sock_close+0x11/0x20
__fput+0x92/0x250
task_work_run+0x6a/0xa0
do_exit+0x33b/0xb60
do_group_exit+0x2f/0xa0
get_signal+0xb6/0x950
arch_do_signal_or_restart+0xac/0x2a0
? vfs_write+0x237/0x290
exit_to_user_mode_prepare+0xa9/0x200
syscall_exit_to_user_mode+0x12/0x30
do_syscall_64+0x46/0x80
entry_SYSCALL_64_after_hwframe+0x44/0xae
</TASK>
WARNING: CPU: 0 PID: 2136 at net/ipv4/af_inet.c:155 inet_sock_destruct+0x13c/0x260
Call Trace:
<TASK>
__sk_destruct+0x24/0x1f0
sk_psock_destroy+0x19b/0x1c0
process_one_work+0x1b3/0x3c0
worker_thread+0x30/0x350
? process_one_work+0x3c0/0x3c0
kthread+0xe6/0x110
? kthread_complete_and_exit+0x20/0x20
ret_from_fork+0x22/0x30
</TASK>
Fixes: 604326b41a6f ("bpf, sockmap: convert to generic sk_msg interface")
Signed-off-by: Wang Yufen <wangyufen@huawei.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Link: https://lore.kernel.org/bpf/20220304081145.2037182-4-wangyufen@huawei.com
Wang Yufen [Fri, 4 Mar 2022 08:11:43 +0000 (16:11 +0800)]
bpf, sockmap: Fix memleak in tcp_bpf_sendmsg while sk msg is full
If tcp_bpf_sendmsg() is running while the sk msg is full and sk_msg_alloc()
returns an -ENOMEM error, tcp_bpf_sendmsg() goes to wait_for_memory. If
partial memory has been allocated by sk_msg_alloc(), that is, msg_tx->sg.size
is greater than osize after sk_msg_alloc(), a memleak occurs. To fix, we use
sk_msg_trim() to release the allocated memory, then goto wait_for_memory.
Other call paths of sk_msg_alloc() have a similar issue, such as
tls_sw_sendmsg(), so handle the sk_msg_trim logic inside sk_msg_alloc(),
as Cong Wang suggested.
This issue can cause the following info:
WARNING: CPU: 3 PID: 7950 at net/core/stream.c:208 sk_stream_kill_queues+0xd4/0x1a0
Call Trace:
<TASK>
inet_csk_destroy_sock+0x55/0x110
__tcp_close+0x279/0x470
tcp_close+0x1f/0x60
inet_release+0x3f/0x80
__sock_release+0x3d/0xb0
sock_close+0x11/0x20
__fput+0x92/0x250
task_work_run+0x6a/0xa0
do_exit+0x33b/0xb60
do_group_exit+0x2f/0xa0
get_signal+0xb6/0x950
arch_do_signal_or_restart+0xac/0x2a0
exit_to_user_mode_prepare+0xa9/0x200
syscall_exit_to_user_mode+0x12/0x30
do_syscall_64+0x46/0x80
entry_SYSCALL_64_after_hwframe+0x44/0xae
</TASK>
WARNING: CPU: 3 PID: 2094 at net/ipv4/af_inet.c:155 inet_sock_destruct+0x13c/0x260
Call Trace:
<TASK>
__sk_destruct+0x24/0x1f0
sk_psock_destroy+0x19b/0x1c0
process_one_work+0x1b3/0x3c0
kthread+0xe6/0x110
ret_from_fork+0x22/0x30
</TASK>
Fixes: 604326b41a6f ("bpf, sockmap: convert to generic sk_msg interface")
Signed-off-by: Wang Yufen <wangyufen@huawei.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Link: https://lore.kernel.org/bpf/20220304081145.2037182-3-wangyufen@huawei.com
Wang Yufen [Fri, 4 Mar 2022 08:11:42 +0000 (16:11 +0800)]
bpf, sockmap: Fix memleak in sk_psock_queue_msg
If tcp_bpf_sendmsg is running during a tear down operation we may enqueue
data on the ingress msg queue while tear down is trying to free it.
sk1 (redirect sk2)                      sk2
-------------------                     ---------------
tcp_bpf_sendmsg()
 tcp_bpf_send_verdict()
  tcp_bpf_sendmsg_redir()
   bpf_tcp_ingress()
                                        sock_map_close()
                                         lock_sock()
    lock_sock() ... blocking
                                         sk_psock_stop
                                          sk_psock_clear_state(psock, SK_PSOCK_TX_ENABLED);
                                         release_sock(sk);
    lock_sock()
    sk_mem_charge()
    get_page()
    sk_psock_queue_msg()
     sk_psock_test_state(psock, SK_PSOCK_TX_ENABLED);
      drop_sk_msg()
    release_sock()
In drop_sk_msg(), the msg has memory charged from the sk by sk_mem_charge
and has sg pages that need to be put. To fix, we use sk_msg_free() and then
kfree() the msg.
This issue can cause the following info:
WARNING: CPU: 0 PID: 9202 at net/core/stream.c:205 sk_stream_kill_queues+0xc8/0xe0
Call Trace:
<IRQ>
inet_csk_destroy_sock+0x55/0x110
tcp_rcv_state_process+0xe5f/0xe90
? sk_filter_trim_cap+0x10d/0x230
? tcp_v4_do_rcv+0x161/0x250
tcp_v4_do_rcv+0x161/0x250
tcp_v4_rcv+0xc3a/0xce0
ip_protocol_deliver_rcu+0x3d/0x230
ip_local_deliver_finish+0x54/0x60
ip_local_deliver+0xfd/0x110
? ip_protocol_deliver_rcu+0x230/0x230
ip_rcv+0xd6/0x100
? ip_local_deliver+0x110/0x110
__netif_receive_skb_one_core+0x85/0xa0
process_backlog+0xa4/0x160
__napi_poll+0x29/0x1b0
net_rx_action+0x287/0x300
__do_softirq+0xff/0x2fc
do_softirq+0x79/0x90
</IRQ>
WARNING: CPU: 0 PID: 531 at net/ipv4/af_inet.c:154 inet_sock_destruct+0x175/0x1b0
Call Trace:
<TASK>
__sk_destruct+0x24/0x1f0
sk_psock_destroy+0x19b/0x1c0
process_one_work+0x1b3/0x3c0
? process_one_work+0x3c0/0x3c0
worker_thread+0x30/0x350
? process_one_work+0x3c0/0x3c0
kthread+0xe6/0x110
? kthread_complete_and_exit+0x20/0x20
ret_from_fork+0x22/0x30
</TASK>
Fixes: 9635720b7c88 ("bpf, sockmap: Fix memleak on ingress msg enqueue")
Signed-off-by: Wang Yufen <wangyufen@huawei.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Link: https://lore.kernel.org/bpf/20220304081145.2037182-2-wangyufen@huawei.com
Yonghong Song [Fri, 11 Mar 2022 00:37:21 +0000 (16:37 -0800)]
selftests/bpf: Fix a clang compilation error for send_signal.c
Building selftests/bpf with latest clang compiler (clang15 built
from source), I hit the following compilation error:
/.../prog_tests/send_signal.c:43:16: error: variable 'j' set but not used [-Werror,-Wunused-but-set-variable]
volatile int j = 0;
^
1 error generated.
The problem also exists with clang13 and clang14. clang12 is okay.
In send_signal.c, we have the following code ...
  volatile int j = 0;
  [...]
  for (int i = 0; i < 100000000 && !sigusr1_received; i++)
      j /= i + 1;
... to burn CPU cycles so the bpf_send_signal() helper can be tested
in NMI mode.
Slightly changing 'j /= i + 1' to 'j /= i + j + 1' or 'j++' can
fix the problem. Further investigation indicated this should be
a clang bug ([1]). The upstream fix will be proposed later. But it
is a good idea to work around the issue to unblock people who build
kernel/selftests with clang.
[1] https://discourse.llvm.org/t/strange-clang-unused-but-set-variable-error-with-volatile-variables/60841
Signed-off-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20220311003721.2177170-1-yhs@fb.com
Toke Høiland-Jørgensen [Thu, 10 Mar 2022 22:56:21 +0000 (23:56 +0100)]
selftests/bpf: Add a test for maximum packet size in xdp_do_redirect
This adds an extra test to the xdp_do_redirect selftest for XDP live packet
mode, which verifies that the maximum permissible packet size is accepted
without any errors, and that a too big packet is correctly rejected.
Signed-off-by: Toke Høiland-Jørgensen <toke@redhat.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Martin KaFai Lau <kafai@fb.com>
Link: https://lore.kernel.org/bpf/20220310225621.53374-2-toke@redhat.com
Toke Høiland-Jørgensen [Thu, 10 Mar 2022 22:56:20 +0000 (23:56 +0100)]
bpf, test_run: Fix packet size check for live packet mode
The live packet mode uses some extra space at the start of each page to
cache data structures so they don't have to be rebuilt at every repetition.
This space wasn't correctly accounted for in the size checking of the
arguments supplied to userspace. In addition, the definition of the frame
size should include the size of the skb_shared_info (as there is other
logic that subtracts the size of this).
Together, these mistakes resulted in userspace being able to trip the
XDP_WARN() in xdp_update_frame_from_buff(), which syzbot discovered in
short order. Fix this by changing the frame size define and adding the
extra headroom to the bpf_prog_test_run_xdp() function. Also drop the
max_len parameter to the page_pool init, since this is related to DMA which
is not used for the page pool instance in PROG_TEST_RUN.
Fixes: b530e9e1063e ("bpf: Add "live packet" mode for XDP in BPF_PROG_RUN")
Reported-by: syzbot+0e91362d99386dc5de99@syzkaller.appspotmail.com
Signed-off-by: Toke Høiland-Jørgensen <toke@redhat.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Martin KaFai Lau <kafai@fb.com>
Link: https://lore.kernel.org/bpf/20220310225621.53374-1-toke@redhat.com
Hao Luo [Thu, 10 Mar 2022 21:16:55 +0000 (13:16 -0800)]
compiler_types: Refactor the use of btf_type_tag attribute.
Previous patches have introduced the compiler attribute btf_type_tag for
__user and __percpu. The availability of this attribute depends on
some CONFIGs and compiler support. This patch refactors the use
of btf_type_tag by introducing BTF_TYPE_TAG, which hides all the
dependencies.
No functional change.
Suggested-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Hao Luo <haoluo@google.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Yonghong Song <yhs@fb.com>
Link: https://lore.kernel.org/bpf/20220310211655.3173786-1-haoluo@google.com
Alexei Starovoitov [Fri, 11 Mar 2022 02:58:04 +0000 (18:58 -0800)]
Merge branch 'bpf-lsm: Extend interoperability with IMA'
Roberto Sassu says:
====================
Extend the interoperability with IMA, to give wider flexibility for the
implementation of integrity-focused LSMs based on eBPF.
Patch 1 fixes some style issues.
Patches 2-6 give the ability to eBPF-based LSMs to take advantage of the
measurement capability of IMA without needing to setup a policy in IMA
(those LSMs might implement the policy capability themselves).
Patches 7-9 allow eBPF-based LSMs to evaluate files read by the kernel.
Changelog
v2:
- Add better description to patch 1 (suggested by Shuah)
- Recalculate digest if it is not fresh (when IMA_COLLECTED flag not set)
- Move declaration of bpf_ima_file_hash() at the end (suggested by
Yonghong)
- Add tests to check if the digest has been recalculated
- Add deny test for bpf_kernel_read_file()
- Add description to tests
v1:
- Modify ima_file_hash() only and allow the usage of the function with the
modified behavior by eBPF-based LSMs through the new function
bpf_ima_file_hash() (suggested by Mimi)
- Make bpf_lsm_kernel_read_file() sleepable so that bpf_ima_inode_hash()
and bpf_ima_file_hash() can be called inside the implementation of
eBPF-based LSMs for this hook
====================
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Roberto Sassu [Wed, 2 Mar 2022 11:14:04 +0000 (12:14 +0100)]
selftests/bpf: Check that bpf_kernel_read_file() denies reading IMA policy
Check that bpf_kernel_read_file() denies the reading of an IMA policy, by
ensuring that ima_setup.sh exits with an error.
Signed-off-by: Roberto Sassu <roberto.sassu@huawei.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20220302111404.193900-10-roberto.sassu@huawei.com
Roberto Sassu [Wed, 2 Mar 2022 11:14:03 +0000 (12:14 +0100)]
selftests/bpf: Add test for bpf_lsm_kernel_read_file()
Test the ability of bpf_lsm_kernel_read_file() to call the sleepable
functions bpf_ima_inode_hash() or bpf_ima_file_hash() to obtain a
measurement of a loaded IMA policy.
Signed-off-by: Roberto Sassu <roberto.sassu@huawei.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20220302111404.193900-9-roberto.sassu@huawei.com
Roberto Sassu [Wed, 2 Mar 2022 11:14:02 +0000 (12:14 +0100)]
bpf-lsm: Make bpf_lsm_kernel_read_file() as sleepable
Make bpf_lsm_kernel_read_file() sleepable, so that bpf_ima_inode_hash()
or bpf_ima_file_hash() can be called inside the implementation of this
hook.
Signed-off-by: Roberto Sassu <roberto.sassu@huawei.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20220302111404.193900-8-roberto.sassu@huawei.com
Roberto Sassu [Wed, 2 Mar 2022 11:14:01 +0000 (12:14 +0100)]
selftests/bpf: Check if the digest is refreshed after a file write
Verify that bpf_ima_inode_hash() returns a non-fresh digest after a file
write, and that bpf_ima_file_hash() returns a fresh digest. Verification is
done by requesting the digest from the bprm_creds_for_exec hook, called
before ima_bprm_check().
Signed-off-by: Roberto Sassu <roberto.sassu@huawei.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20220302111404.193900-7-roberto.sassu@huawei.com
Roberto Sassu [Wed, 2 Mar 2022 11:14:00 +0000 (12:14 +0100)]
selftests/bpf: Add test for bpf_ima_file_hash()
Add new test to ensure that bpf_ima_file_hash() returns the digest of the
executed files.
Signed-off-by: Roberto Sassu <roberto.sassu@huawei.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20220302111404.193900-6-roberto.sassu@huawei.com
Roberto Sassu [Wed, 2 Mar 2022 11:13:59 +0000 (12:13 +0100)]
selftests/bpf: Move sample generation code to ima_test_common()
Move sample generator code to ima_test_common() so that the new function
can be called by multiple LSM hooks.
Signed-off-by: Roberto Sassu <roberto.sassu@huawei.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20220302111404.193900-5-roberto.sassu@huawei.com
Roberto Sassu [Wed, 2 Mar 2022 11:13:58 +0000 (12:13 +0100)]
bpf-lsm: Introduce new helper bpf_ima_file_hash()
ima_file_hash() has been modified to calculate the measurement of a file on
demand, if it has not already been performed by IMA or the measurement is
not fresh. For compatibility reasons, ima_inode_hash() remains unchanged.
Keep the same approach in eBPF and introduce the new helper
bpf_ima_file_hash() to take advantage of the modified behavior of
ima_file_hash().
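A hedged BPF-side sketch of calling the new helper from a sleepable LSM hook (the program name and digest buffer size are assumptions):
```c
#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

char LICENSE[] SEC("license") = "GPL";

SEC("lsm.s/bprm_creds_for_exec")	/* sleepable, so the helper may block */
int BPF_PROG(measure_exec, struct linux_binprm *bprm)
{
	u8 digest[64] = {};	/* room for up to sha512 */
	long len;

	/* recalculates the measurement if IMA has no fresh digest cached */
	len = bpf_ima_file_hash(bprm->file, digest, sizeof(digest));
	if (len < 0)
		return 0;	/* no measurement available; allow the exec */
	return 0;
}
```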
Signed-off-by: Roberto Sassu <roberto.sassu@huawei.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20220302111404.193900-4-roberto.sassu@huawei.com
Roberto Sassu [Wed, 2 Mar 2022 11:13:57 +0000 (12:13 +0100)]
ima: Always return a file measurement in ima_file_hash()
__ima_inode_hash() checks if a digest has been already calculated by
looking for the integrity_iint_cache structure associated to the passed
inode.
Users of ima_file_hash() (e.g. eBPF) might be interested in obtaining the
information without having to set up an IMA policy, so that the digest is
always available at the time they call this function.
In addition, they likely expect the digest to be fresh, e.g. recalculated
by IMA after a file write. Although getting the digest from the
bprm_committed_creds hook (as in the eBPF test) ensures that the digest is
fresh, as the IMA hook is executed before that hook, this is not always the
case (e.g. for the mmap_file hook).
Call ima_collect_measurement() in __ima_inode_hash(), if the file
descriptor is available (passed by ima_file_hash()) and the digest is not
available/not fresh, and store the file measurement in a temporary
integrity_iint_cache structure.
This change does not increase memory usage, due to using the
temporary integrity_iint_cache structure and due to freeing the
ima_digest_data structure inside integrity_iint_cache before exiting from
__ima_inode_hash().
For compatibility reasons, the behavior of ima_inode_hash() remains
unchanged.
Signed-off-by: Roberto Sassu <roberto.sassu@huawei.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Reviewed-by: Mimi Zohar <zohar@linux.ibm.com>
Link: https://lore.kernel.org/bpf/20220302111404.193900-3-roberto.sassu@huawei.com
Roberto Sassu [Wed, 2 Mar 2022 11:13:56 +0000 (12:13 +0100)]
ima: Fix documentation-related warnings in ima_main.c
Fix the following warnings in ima_main.c, displayed with W=n make argument:
security/integrity/ima/ima_main.c:432: warning: Function parameter or
member 'vma' not described in 'ima_file_mprotect'
security/integrity/ima/ima_main.c:636: warning: Function parameter or
member 'inode' not described in 'ima_post_create_tmpfile'
security/integrity/ima/ima_main.c:636: warning: Excess function parameter
'file' description in 'ima_post_create_tmpfile'
security/integrity/ima/ima_main.c:843: warning: Function parameter or
member 'load_id' not described in 'ima_post_load_data'
security/integrity/ima/ima_main.c:843: warning: Excess function parameter
'id' description in 'ima_post_load_data'
Also, fix some style issues in the description of ima_post_create_tmpfile()
and ima_post_path_mknod().
Signed-off-by: Roberto Sassu <roberto.sassu@huawei.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Reviewed-by: Shuah Khan <skhan@linuxfoundation.org>
Reviewed-by: Mimi Zohar <zohar@linux.ibm.com>
Link: https://lore.kernel.org/bpf/20220302111404.193900-2-roberto.sassu@huawei.com
Chris J Arges [Wed, 9 Mar 2022 21:41:58 +0000 (15:41 -0600)]
bpftool: Ensure bytes_memlock json output is correct
If a BPF map is created with a memlock size over 2^32, the memlock value
displayed in JSON format will be incorrect. Use atoll instead of atoi so
that the correct number is displayed.
```
$ bpftool map create /sys/fs/bpf/test_bpfmap type hash key 4 \
value 1024 entries 4194304 name test_bpfmap
$ bpftool map list
1: hash name test_bpfmap flags 0x0
key 4B value 1024B max_entries 4194304 memlock 4328521728B
$ sudo bpftool map list -j | jq .[].bytes_memlock
33554432
```
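The failure mode is plain C integer truncation; a hedged standalone illustration:
```c
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
	const char *memlock = "4328521728";	/* > 2^32, as in the map above */

	printf("atoi:  %d\n", atoi(memlock));	 /* 33554432: wrapped, matches the bad jq output */
	printf("atoll: %lld\n", atoll(memlock)); /* 4328521728: correct */
	return 0;
}
```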
Signed-off-by: Chris J Arges <carges@cloudflare.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/b6601087-0b11-33cc-904a-1133d1500a10@cloudflare.com
Yuntao Wang [Thu, 10 Mar 2022 16:15:18 +0000 (00:15 +0800)]
bpf: Use offsetofend() to simplify macro definition
Use offsetofend() instead of offsetof() + sizeof() to simplify
MIN_BPF_LINEINFO_SIZE macro definition.
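The simplification looks roughly like this (struct bpf_line_info is the real uapi struct; the before/after shapes are a sketch):
```c
#include <linux/bpf.h>		/* struct bpf_line_info */
#include <linux/stddef.h>	/* offsetofend() */

/* before:
 *   offsetof(struct bpf_line_info, line_col) + sizeof(__u32)
 * after:
 */
#define MIN_BPF_LINEINFO_SIZE \
	offsetofend(struct bpf_line_info, line_col)
```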
Signed-off-by: Yuntao Wang <ytcoode@gmail.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Yonghong Song <yhs@fb.com>
Acked-by: Joanne Koong <joannelkoong@gmail.com>
Link: https://lore.kernel.org/bpf/20220310161518.534544-1-ytcoode@gmail.com
Hengqi Chen [Thu, 10 Mar 2022 15:53:35 +0000 (23:53 +0800)]
bpf: Fix comment for helper bpf_current_task_under_cgroup()
Fix the descriptions of the return values of helper bpf_current_task_under_cgroup().
Fixes: c6b5fb8690fa ("bpf: add documentation for eBPF helpers (42-50)")
Signed-off-by: Hengqi Chen <hengqi.chen@gmail.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Yonghong Song <yhs@fb.com>
Link: https://lore.kernel.org/bpf/20220310155335.1278783-1-hengqi.chen@gmail.com
Daniel Borkmann [Thu, 10 Mar 2022 21:57:06 +0000 (22:57 +0100)]
Merge branch 'bpf-tstamp-follow-ups'
Martin KaFai Lau says:
====================
This set is a follow up on the bpf side based on discussion [0].
Patch 1 is to remove some skbuff macros that are used in bpf filter.c.
Patch 2 and 3 are to simplify the bpf insn rewrite on __sk_buff->tstamp.
Patch 4 is to simplify the bpf uapi by modeling __sk_buff->tstamp
and __sk_buff->tstamp_type (was delivery_time_type) the same as their
kernel counterparts skb->tstamp and skb->mono_delivery_time.
Patch 5 is to adjust the bpf selftests due to changes in patch 4.
[0]: https://lore.kernel.org/bpf/419d994e-ff61-7c11-0ec7-11fefcb0186e@iogearbox.net/
====================
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Martin KaFai Lau [Wed, 9 Mar 2022 09:05:15 +0000 (01:05 -0800)]
bpf: selftests: Update tests after s/delivery_time/tstamp/ change in bpf.h
The previous patch made the following changes:
- s/delivery_time_type/tstamp_type/
- s/bpf_skb_set_delivery_time/bpf_skb_set_tstamp/
- BPF_SKB_DELIVERY_TIME_* to BPF_SKB_TSTAMP_*
This patch is to change the test_tc_dtime.c to reflect the above.
Signed-off-by: Martin KaFai Lau <kafai@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20220309090515.3712742-1-kafai@fb.com
Martin KaFai Lau [Wed, 9 Mar 2022 09:05:09 +0000 (01:05 -0800)]
bpf: Remove BPF_SKB_DELIVERY_TIME_NONE and rename s/delivery_time_/tstamp_/
This patch is to simplify the uapi bpf.h regarding the tstamp type
and use a similar way as the kernel to describe the value stored
in __sk_buff->tstamp.
My earlier thought was to avoid describing the semantic and
clock base for the rcv timestamp until there is more clarity
on the use case, hence the __sk_buff->delivery_time_type naming instead
of __sk_buff->tstamp_type.
On further thought, the UNSPEC naming can be reused. This patch first
removes BPF_SKB_DELIVERY_TIME_NONE and also
renames BPF_SKB_DELIVERY_TIME_UNSPEC to BPF_SKB_TSTAMP_UNSPEC
and BPF_SKB_DELIVERY_TIME_MONO to BPF_SKB_TSTAMP_DELIVERY_MONO.
The semantic of BPF_SKB_TSTAMP_DELIVERY_MONO is the same:
__sk_buff->tstamp has delivery time in mono clock base.
BPF_SKB_TSTAMP_UNSPEC means __sk_buff->tstamp has the (rcv)
tstamp at ingress and the delivery time at egress. At egress,
the clock base could be found from skb->sk->sk_clockid.
__sk_buff->tstamp == 0 naturally means NONE, so NONE is not needed.
With BPF_SKB_TSTAMP_UNSPEC for the rcv tstamp at ingress,
the __sk_buff->delivery_time_type is also renamed to __sk_buff->tstamp_type
which was also suggested in the earlier discussion:
https://lore.kernel.org/bpf/b181acbe-caf8-502d-4b7b-7d96b9fc5d55@iogearbox.net/
The above will then make __sk_buff->tstamp and __sk_buff->tstamp_type
the same as their kernel counterparts skb->tstamp and
skb->mono_delivery_time.
The internal kernel function bpf_skb_convert_dtime_type_read() is then
renamed to bpf_skb_convert_tstamp_type_read() and it can be simplified
with the BPF_SKB_DELIVERY_TIME_NONE gone. A BPF_ALU32_IMM(BPF_AND)
insn is also saved by using BPF_JMP32_IMM(BPF_JSET).
The bpf helper bpf_skb_set_delivery_time() is also renamed to
bpf_skb_set_tstamp(). The arg name is also changed from dtime
to tstamp. It only allows setting tstamp 0 for
BPF_SKB_TSTAMP_UNSPEC; this could be relaxed later
if there is a use case to change a mono delivery time to
non-mono.
prog->delivery_time_access is also renamed to prog->tstamp_type_access.
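As a sketch of the renamed helper in use from a tc egress program (the
section and function names here are illustrative, not taken from the patch):
```
#include <linux/bpf.h>
#include <linux/pkt_cls.h>
#include <bpf/bpf_helpers.h>

SEC("tc")
int set_mono_delivery_time(struct __sk_buff *skb)
{
	/* bpf_ktime_get_ns() is CLOCK_MONOTONIC, matching the mono clock
	 * base that BPF_SKB_TSTAMP_DELIVERY_MONO describes. */
	__u64 now = bpf_ktime_get_ns();

	bpf_skb_set_tstamp(skb, now, BPF_SKB_TSTAMP_DELIVERY_MONO);
	return TC_ACT_OK;
}

char LICENSE[] SEC("license") = "GPL";
```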
Signed-off-by: Martin KaFai Lau <kafai@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20220309090509.3712315-1-kafai@fb.com
Martin KaFai Lau [Wed, 9 Mar 2022 09:05:02 +0000 (01:05 -0800)]
bpf: Simplify insn rewrite on BPF_WRITE __sk_buff->tstamp
BPF_JMP32_IMM(BPF_JSET) is used to save a BPF_ALU32_IMM(BPF_AND).
The skb->tc_at_ingress and skb->mono_delivery_time are at the same
offset, so only one BPF_LDX_MEM(BPF_B) is needed.
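A rough illustration of the one-insn saving (a generic flag test, not the
exact rewrite emitted here; MASK and the jump offset 2 are placeholders):
```
/* Before: mask the bit into a scratch reg, then branch (2 insns). */
BPF_ALU32_IMM(BPF_AND, BPF_REG_5, MASK),
BPF_JMP_IMM(BPF_JNE, BPF_REG_5, 0, 2),

/* After: BPF_JSET branches directly if (r5 & MASK) != 0 (1 insn),
 * and leaves r5 intact. */
BPF_JMP32_IMM(BPF_JSET, BPF_REG_5, MASK, 2),
```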
Signed-off-by: Martin KaFai Lau <kafai@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20220309090502.3711982-1-kafai@fb.com
Martin KaFai Lau [Wed, 9 Mar 2022 09:04:56 +0000 (01:04 -0800)]
bpf: Simplify insn rewrite on BPF_READ __sk_buff->tstamp
The skb->tc_at_ingress and skb->mono_delivery_time are at the same
byte offset. Thus, only one BPF_LDX_MEM(BPF_B) is needed
and both bits can be tested together.
/* BPF_READ: a = __sk_buff->tstamp */
if (skb->tc_at_ingress && skb->mono_delivery_time)
        a = 0;
else
        a = skb->tstamp;
Signed-off-by: Martin KaFai Lau <kafai@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20220309090456.3711530-1-kafai@fb.com
Martin KaFai Lau [Wed, 9 Mar 2022 09:04:50 +0000 (01:04 -0800)]
bpf: net: Remove TC_AT_INGRESS_OFFSET and SKB_MONO_DELIVERY_TIME_OFFSET macro
This patch removes the TC_AT_INGRESS_OFFSET and
SKB_MONO_DELIVERY_TIME_OFFSET macros. Instead, PKT_VLAN_PRESENT_OFFSET
is used because all of them are at the same offset. A comment is added to
make it clear that changing the position of tc_at_ingress or
mono_delivery_time will require adjusting the defined macros.
The earlier discussion can be found here:
https://lore.kernel.org/bpf/419d994e-ff61-7c11-0ec7-11fefcb0186e@iogearbox.net/
Signed-off-by: Martin KaFai Lau <kafai@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20220309090450.3710955-1-kafai@fb.com
Yihao Han [Thu, 10 Mar 2022 09:28:27 +0000 (01:28 -0800)]
bpf, test_run: Use kvfree() for memory allocated with kvmalloc()
The memory is allocated with kvmalloc(), so the corresponding release
function should not be kfree(); use kvfree() instead.
Generated by: scripts/coccinelle/api/kfree_mismatch.cocci
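The general rule, as a sketch (function and variable names are illustrative):
```
#include <linux/mm.h>	/* kvmalloc(), kvfree() */

static int example_alloc(size_t size)
{
	/* kvmalloc() may return kmalloc'd *or* vmalloc'd memory; only
	 * kvfree() can tell the two apart and release either kind. */
	void *buf = kvmalloc(size, GFP_KERNEL);

	if (!buf)
		return -ENOMEM;
	/* ... use buf ... */
	kvfree(buf);	/* kfree() here would be a bug for vmalloc'd memory */
	return 0;
}
```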
Fixes: b530e9e1063e ("bpf: Add "live packet" mode for XDP in BPF_PROG_RUN")
Signed-off-by: Yihao Han <hanyihao@vivo.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Cc: Toke Høiland-Jørgensen <toke@redhat.com>
Link: https://lore.kernel.org/bpf/20220310092828.13405-1-hanyihao@vivo.com
Toke Høiland-Jørgensen [Thu, 10 Mar 2022 11:02:28 +0000 (12:02 +0100)]
bpf: Initialise retval in bpf_prog_test_run_xdp()
The kernel test robot pointed out that the newly added
bpf_test_run_xdp_live() runner doesn't set the retval in the caller (by
design), which means that the variable can be passed unitialised to
bpf_test_finish(). Fix this by initialising the variable properly.
Fixes: b530e9e1063e ("bpf: Add "live packet" mode for XDP in BPF_PROG_RUN")
Reported-by: kernel test robot <lkp@intel.com>
Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Toke Høiland-Jørgensen <toke@redhat.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20220310110228.161869-1-toke@redhat.com
Niklas Söderlund [Thu, 10 Mar 2022 12:18:46 +0000 (13:18 +0100)]
bpftool: Restore support for BPF offload-enabled feature probing
Commit 1a56c18e6c2e4e74 ("bpftool: Stop supporting BPF offload-enabled
feature probing") removed the support to probe for BPF offload features.
This is still useful for NFP NICs, which can support
offloading of BPF programs.
The reason for the dropped support was that libbpf starting with v1.0
would drop support for passing the ifindex to the BPF prog/map/helper
feature probing APIs. In order to keep this useful feature for NFP
restore the functionality by moving it directly into bpftool.
The restored code is a simplified version of the code that existed in
libbpf, which supported passing the ifindex. The simplification is that it
only targets the cases where an ifindex is given, and calls into libbpf for
the cases where it's not.
Before restoring support for probing offload features:
# bpftool feature probe dev ens4np0
Scanning system call availability...
bpf() syscall is available
Scanning eBPF program types...
Scanning eBPF map types...
Scanning eBPF helper functions...
eBPF helpers supported for program type sched_cls:
eBPF helpers supported for program type xdp:
Scanning miscellaneous eBPF features...
Large program size limit is NOT available
Bounded loop support is NOT available
ISA extension v2 is NOT available
ISA extension v3 is NOT available
With support for probing offload features restored:
# bpftool feature probe dev ens4np0
Scanning system call availability...
bpf() syscall is available
Scanning eBPF program types...
eBPF program_type sched_cls is available
eBPF program_type xdp is available
Scanning eBPF map types...
eBPF map_type hash is available
eBPF map_type array is available
Scanning eBPF helper functions...
eBPF helpers supported for program type sched_cls:
- bpf_map_lookup_elem
- bpf_get_prandom_u32
- bpf_perf_event_output
eBPF helpers supported for program type xdp:
- bpf_map_lookup_elem
- bpf_get_prandom_u32
- bpf_perf_event_output
- bpf_xdp_adjust_head
- bpf_xdp_adjust_tail
Scanning miscellaneous eBPF features...
Large program size limit is NOT available
Bounded loop support is NOT available
ISA extension v2 is NOT available
ISA extension v3 is NOT available
Signed-off-by: Niklas Söderlund <niklas.soderlund@corigine.com>
Signed-off-by: Simon Horman <simon.horman@corigine.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Reviewed-by: Quentin Monnet <quentin@isovalent.com>
Link: https://lore.kernel.org/bpf/20220310121846.921256-1-niklas.soderlund@corigine.com
Alexei Starovoitov [Wed, 9 Mar 2022 22:19:23 +0000 (14:19 -0800)]
Merge branch 'Add support for transmitting packets using XDP in bpf_prog_run()'
Toke Høiland-Jørgensen says:
====================
This series adds support for transmitting packets using XDP in
bpf_prog_run(), by enabling a new "live packet" mode which handles
the XDP program return codes and redirects the packets to the stack or other
devices.
The primary use case for this is testing the redirect map types and the
ndo_xdp_xmit driver operation without an external traffic generator. But it
turns out to also be useful for creating a programmable traffic generator
in XDP, as well as injecting frames into the stack. A sample traffic
generator, which was included in previous versions of the series but has now
been moved to xdp-tools, transmits up to 9 Mpps/core on my test machine.
To transmit the frames, the new mode instantiates a page_pool structure in
bpf_prog_run() and initialises the pages to contain XDP frames with the
data passed in by userspace. These frames can then be handled as though
they came from the hardware XDP path, and the existing page_pool code takes
care of returning and recycling them. The setup is optimised for high
performance with a high number of repetitions to support stress testing and
the traffic generator use case; see patch 1 for details.
v11:
- Fix override of return code in xdp_test_run_batch()
- Add Martin's ACKs to remaining patches
v10:
- Only propagate memory allocation errors from xdp_test_run_batch()
- Get rid of BPF_F_TEST_XDP_RESERVED; batch_size can be used to probe
- Check that batch_size is unset in non-XDP test_run funcs
- Lower the number of repetitions in the selftest to 10k
- Count number of recycled pages in the selftest
- Fix a few other nits from Martin, carry forward ACKs
v9:
- XDP_DROP packets in the selftest to ensure pages are recycled
- Fix a few issues reported by the kernel test robot
- Rewrite the documentation of the batch size to make it a bit clearer
- Rebase to newest bpf-next
v8:
- Make the batch size configurable from userspace
- Don't interrupt the packet loop on errors in do_redirect (this can be
caught from the tracepoint)
- Add documentation of the feature
- Add reserved flag userspace can use to probe for support (kernel didn't
check flags previously)
- Rebase to newest bpf-next, disallow live mode for jumbo frames
v7:
- Extend the local_bh_disable() to cover the full test run loop, to prevent
running concurrently with the softirq. Fixes a deadlock with veth xmit.
- Reinstate the forwarding sysctl setting in the selftest, and bump up the
number of packets being transmitted to trigger the above bug.
- Update commit message to make it clear that user space can select the
ingress interface.
v6:
- Fix meta vs data pointer setting and add a selftest for it
- Add local_bh_disable() around code passing packets up the stack
- Create a new netns for the selftest and use a TC program instead of the
forwarding hack to count packets being XDP_PASS'ed from the test prog.
- Check for the correct ingress ifindex in the selftest
- Rebase and drop patches 1-5 that were already merged
v5:
- Rebase to current bpf-next
v4:
- Fix a few code style issues (Alexei)
- Also handle the other return codes: XDP_PASS builds skbs and injects them
into the stack, and XDP_TX is turned into a redirect out the same
interface (Alexei).
- Drop the last patch adding an xdp_trafficgen program to samples/bpf; this
will live in xdp-tools instead (Alexei).
- Add a separate bpf_test_run_xdp_live() function to test_run.c instead of
entangling the new mode in the existing bpf_test_run().
v3:
- Reorder patches to make sure they all build individually (Patchwork)
- Remove a couple of unused variables (Patchwork)
- Remove unlikely() annotation in slow path and add back John's ACK that I
accidentally dropped for v2 (John)
v2:
- Split up __xdp_do_redirect to avoid passing two pointers to it (John)
- Always reset context pointers before each test run (John)
- Use get_mac_addr() from xdp_sample_user.h instead of rolling our own (Kumar)
- Fix wrong offset for metadata pointer
====================
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Toke Høiland-Jørgensen [Wed, 9 Mar 2022 10:53:46 +0000 (11:53 +0100)]
selftests/bpf: Add selftest for XDP_REDIRECT in BPF_PROG_RUN
This adds a selftest for the XDP_REDIRECT facility in BPF_PROG_RUN, that
redirects packets into a veth and counts them using an XDP program on the
other side of the veth pair and a TC program on the local side of the veth.
Signed-off-by: Toke Høiland-Jørgensen <toke@redhat.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Martin KaFai Lau <kafai@fb.com>
Link: https://lore.kernel.org/bpf/20220309105346.100053-6-toke@redhat.com
Toke Høiland-Jørgensen [Wed, 9 Mar 2022 10:53:45 +0000 (11:53 +0100)]
selftests/bpf: Move open_netns() and close_netns() into network_helpers.c
These will also be used by the xdp_do_redirect test being added in the next
commit.
Signed-off-by: Toke Høiland-Jørgensen <toke@redhat.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Martin KaFai Lau <kafai@fb.com>
Link: https://lore.kernel.org/bpf/20220309105346.100053-5-toke@redhat.com
Toke Høiland-Jørgensen [Wed, 9 Mar 2022 10:53:44 +0000 (11:53 +0100)]
libbpf: Support batch_size option to bpf_prog_test_run
Add support for setting the new batch_size parameter to BPF_PROG_TEST_RUN
to libbpf; just add it as an option and pass it through to the kernel.
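A hedged sketch of using the new option from userspace (prog_fd, pkt, and
pkt_len are assumed to exist; BPF_F_TEST_XDP_LIVE_FRAMES comes from the
kernel patch earlier in this series):
```
#include <linux/bpf.h>
#include <bpf/bpf.h>

static int run_live(int prog_fd, void *pkt, __u32 pkt_len)
{
	LIBBPF_OPTS(bpf_test_run_opts, opts,
		.data_in = pkt,
		.data_size_in = pkt_len,
		.repeat = 1 << 20,
		.flags = BPF_F_TEST_XDP_LIVE_FRAMES,
		/* New: number of packets processed in each syscall batch. */
		.batch_size = 64,
	);

	return bpf_prog_test_run_opts(prog_fd, &opts);
}
```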
Signed-off-by: Toke Høiland-Jørgensen <toke@redhat.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Martin KaFai Lau <kafai@fb.com>
Link: https://lore.kernel.org/bpf/20220309105346.100053-4-toke@redhat.com
Toke Høiland-Jørgensen [Wed, 9 Mar 2022 10:53:43 +0000 (11:53 +0100)]
Documentation/bpf: Add documentation for BPF_PROG_RUN
This adds documentation for the BPF_PROG_RUN command; a short overview of
the command itself, and a more verbose description of the "live packet"
mode for XDP introduced in the previous commit.
Signed-off-by: Toke Høiland-Jørgensen <toke@redhat.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Martin KaFai Lau <kafai@fb.com>
Link: https://lore.kernel.org/bpf/20220309105346.100053-3-toke@redhat.com
Toke Høiland-Jørgensen [Wed, 9 Mar 2022 10:53:42 +0000 (11:53 +0100)]
bpf: Add "live packet" mode for XDP in BPF_PROG_RUN
This adds support for running XDP programs through BPF_PROG_RUN in a mode
that enables live packet processing of the resulting frames. Previous uses
of BPF_PROG_RUN for XDP returned the XDP program return code and the
modified packet data to userspace, which is useful for unit testing of XDP
programs.
The existing BPF_PROG_RUN for XDP allows userspace to set the ingress
ifindex and RXQ number as part of the context object being passed to the
kernel. This patch reuses that code, but adds a new mode with different
semantics, which can be selected with the new BPF_F_TEST_XDP_LIVE_FRAMES
flag.
When running BPF_PROG_RUN in this mode, the XDP program return codes will
be honoured: returning XDP_PASS will result in the frame being injected
into the networking stack as if it came from the selected networking
interface, while returning XDP_TX and XDP_REDIRECT will result in the frame
being transmitted out that interface. XDP_TX is translated into an
XDP_REDIRECT operation to the same interface, since the real XDP_TX action
is only possible from within the network drivers themselves, not from the
process context where BPF_PROG_RUN is executed.
Internally, this new mode of operation creates a page pool instance while
setting up the test run, and feeds pages from that into the XDP program.
The setup cost of this is amortised over the number of repetitions
specified by userspace.
To support the performance testing use case, we further optimise the setup
step so that all pages in the pool are pre-initialised with the packet
data, and pre-computed context and xdp_frame objects stored at the start of
each page. This makes it possible to entirely avoid touching the page
content on each XDP program invocation, and enables sending up to 9
Mpps/core on my test box.
Because the data pages are recycled by the page pool, and the test runner
doesn't re-initialise them for each run, subsequent invocations of the XDP
program will see the packet data in the state it was after the last time it
ran on that particular page. This means that an XDP program that modifies
the packet before redirecting it has to be careful about which assumptions
it makes about the packet content, but that is only an issue for the most
naively written programs.
Enabling the new flag is only allowed when not setting ctx_out and data_out
in the test specification, since using it means frames will be redirected
somewhere else, so they can't be returned.
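For illustration, a minimal XDP program exercising the new mode could look
like this (a sketch; the program name is made up):
```
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

SEC("xdp")
int xdp_tx_all(struct xdp_md *ctx)
{
	/* In live mode, XDP_TX becomes a redirect out of the interface
	 * selected via the ingress_ifindex in the input context. */
	return XDP_TX;
}

char LICENSE[] SEC("license") = "GPL";
```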
Signed-off-by: Toke Høiland-Jørgensen <toke@redhat.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Martin KaFai Lau <kafai@fb.com>
Link: https://lore.kernel.org/bpf/20220309105346.100053-2-toke@redhat.com
Andrii Nakryiko [Wed, 9 Mar 2022 01:39:29 +0000 (17:39 -0800)]
Merge branch 'BPF test_progs tests improvement'
Mykola Lysenko says:
====================
The first patch reduces the sample_freq to 1000 to ensure the test will
work even when kernel.perf_event_max_sample_rate has been reduced to 1000.
The patches for send_signal and find_vma tune the test implementations to
make sure the needed thread is scheduled. Also, both tests will finish as
soon as possible after the test condition is met.
====================
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Mykola Lysenko [Tue, 8 Mar 2022 20:04:49 +0000 (12:04 -0800)]
Improve stability of find_vma BPF test
Remove an unneeded sleep and increase the length of the dummy CPU-intensive
computation to guarantee test process execution.
Also, complete the aforementioned computation as soon as
the test success criterion is met.
Signed-off-by: Mykola Lysenko <mykolal@fb.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: Yonghong Song <yhs@fb.com>
Link: https://lore.kernel.org/bpf/20220308200449.1757478-4-mykolal@fb.com
Mykola Lysenko [Tue, 8 Mar 2022 20:04:48 +0000 (12:04 -0800)]
Improve send_signal BPF test stability
Substitute the sleep with a dummy CPU-intensive computation.
Finish the aforementioned computation as soon as the signal is
delivered to the test process. Make the BPF code
execute only when the PID global variable is set.
Signed-off-by: Mykola Lysenko <mykolal@fb.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: Yonghong Song <yhs@fb.com>
Link: https://lore.kernel.org/bpf/20220308200449.1757478-3-mykolal@fb.com
Mykola Lysenko [Tue, 8 Mar 2022 20:04:47 +0000 (12:04 -0800)]
Improve perf related BPF tests (sample_freq issue)
Linux kernel may automatically reduce kernel.perf_event_max_sample_rate
value when running tests in parallel on slow systems. The kernel checks
against this limit when opening a perf event with the freq=1 parameter set.
The lower bound is 1000. This patch reduces sample_freq value to 1000
in all BPF tests that use sample_freq to ensure they always can open
perf event.
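A sketch of the kind of perf event setup the affected tests perform (the
event type and config shown are illustrative):
```
#include <linux/perf_event.h>
#include <sys/syscall.h>
#include <unistd.h>

static int open_freq_event(void)
{
	struct perf_event_attr attr = {
		.size = sizeof(attr),
		.type = PERF_TYPE_SOFTWARE,
		.config = PERF_COUNT_SW_CPU_CLOCK,
		.freq = 1,
		/* 1000 is the lower bound the kernel will not push
		 * perf_event_max_sample_rate below, so this open cannot
		 * fail the sample-rate limit check. */
		.sample_freq = 1000,
	};

	/* pid=0, cpu=-1: profile the calling thread on any CPU. */
	return syscall(__NR_perf_event_open, &attr, 0, -1, -1, 0);
}
```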
Signed-off-by: Mykola Lysenko <mykolal@fb.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: Yonghong Song <yhs@fb.com>
Link: https://lore.kernel.org/bpf/20220308200449.1757478-2-mykolal@fb.com
Adrian Ratiu [Tue, 8 Mar 2022 12:14:28 +0000 (14:14 +0200)]
tools: Fix unavoidable GCC call in Clang builds
In ChromeOS and Gentoo we catch any unwanted mixed Clang/LLVM
and GCC/binutils usage via toolchain wrappers which fail builds.
This has revealed that GCC is called unconditionally in Clang
configured builds to populate GCC_TOOLCHAIN_DIR.
Allow the user to override CLANG_CROSS_FLAGS to avoid the GCC
call - in our case we set the var directly in the ebuild recipe.
In theory Clang could autodetect these settings so
this logic could be removed entirely, but in practice, as
commit cebdb7374577 ("tools: Help cross-building with clang")
mentions, this does not always work, so giving distributions
more control to specify their flags & sysroot is beneficial.
Suggested-by: Manoj Gupta <manojgupta@chromium.com>
Suggested-by: Nathan Chancellor <nathan@kernel.org>
Signed-off-by: Adrian Ratiu <adrian.ratiu@collabora.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Nathan Chancellor <nathan@kernel.org>
Cc: Nick Desaulniers <ndesaulniers@google.com>
Link: https://lore.kernel.org/lkml/87czjk4osi.fsf@ryzen9.i-did-not-set--mail-host-address--so-tickle-me
Link: https://lore.kernel.org/bpf/20220308121428.81735-1-adrian.ratiu@collabora.com
Felix Maurer [Thu, 3 Mar 2022 11:15:26 +0000 (12:15 +0100)]
selftests/bpf: Make test_lwt_ip_encap more stable and faster
In test_lwt_ip_encap, the ingress IPv6 encap test failed from time to
time. The failure occurred when an IPv4 ping through the IPv6 GRE
encapsulation did not receive a reply within the timeout. The IPv4 ping
and the IPv6 ping in the test used different timeouts (1 sec for IPv4
and 6 sec for IPv6), probably taking into account that IPv6 might need
longer to successfully complete. However, when IPv4 pings (with the
short timeout) are encapsulated into the IPv6 tunnel, the delays of IPv6
apply.
The actual reason for the long delays with IPv6 was that the IPv6
neighbor discovery sometimes did not complete in time. This was caused
by the outgoing interface only having a tentative link local address,
i.e., not having completed DAD for that lladdr. The ND was successfully
retried after 1 sec but that was too late for the ping timeout.
The IPv6 addresses for the test were already added with nodad. However,
for the lladdrs, DAD was still performed. We now disable DAD in the test
netns completely and just assume that the two lladdrs on each veth pair
do not collide. This removes all the delays for IPv6 traffic in the
test.
Without the delays, we can now also reduce the delay of the IPv6 ping to
1 sec. This makes the whole test complete faster because we don't need
to wait for the excessive timeout for each IPv6 ping that is supposed
to fail.
Fixes: 0fde56e4385b0 ("selftests: bpf: add test_lwt_ip_encap selftest")
Signed-off-by: Felix Maurer <fmaurer@redhat.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/4987d549d48b4e316cd5b3936de69c8d4bc75a4f.1646305899.git.fmaurer@redhat.com
Shung-Hsi Yu [Mon, 7 Mar 2022 05:29:18 +0000 (13:29 +0800)]
bpf: Determine buf_info inside check_buffer_access()
Instead of determining the buf_info string in the caller of check_buffer_access(),
we can determine whether the register type is read-only through the
type_is_rdonly_mem() helper inside check_buffer_access() and construct
buf_info, making the code slightly cleaner.
Signed-off-by: Shung-Hsi Yu <shung-hsi.yu@suse.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Yonghong Song <yhs@fb.com>
Link: https://lore.kernel.org/bpf/YiWYLnAkEZXBP/gH@syu-laptop
KP Singh [Mon, 7 Mar 2022 13:30:48 +0000 (13:30 +0000)]
bpf/docs: Update list of architectures supported.
vmtest.sh also supports s390x now.
Signed-off-by: KP Singh <kpsingh@kernel.org>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20220307133048.1287644-2-kpsingh@kernel.org
KP Singh [Mon, 7 Mar 2022 13:30:47 +0000 (13:30 +0000)]
bpf/docs: Update vmtest docs for static linking
Dynamic linking when compiling on the host can cause issues when the
libc version does not match the one in the VM image. Update the
docs to explain how to avoid this by linking statically.
Before:
./vmtest.sh -- ./test_progs -t test_ima
./test_progs: /usr/lib/libc.so.6: version `GLIBC_2.33' not found (required by ./test_progs)
After:
LDLIBS=-static ./vmtest.sh -- ./test_progs -t test_ima
test_ima:OK
Summary: 1/0 PASSED, 0 SKIPPED, 0 FAILED
Reported-by: "Geyslan G. Bem" <geyslan@gmail.com>
Signed-off-by: KP Singh <kpsingh@kernel.org>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20220307133048.1287644-1-kpsingh@kernel.org
Yuntao Wang [Sat, 5 Mar 2022 16:10:13 +0000 (00:10 +0800)]
bpf: Remove redundant slash
The trailing slash of LIBBPF_SRCS is redundant, remove it. Also inline
it, as it's only used in LIBBPF_INCLUDE.
Signed-off-by: Yuntao Wang <ytcoode@gmail.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20220305161013.361646-1-ytcoode@gmail.com
Guo Zhengkui [Sun, 6 Mar 2022 02:34:26 +0000 (10:34 +0800)]
libbpf: Fix array_size.cocci warning
Fix the following coccicheck warning:
tools/lib/bpf/bpf.c:114:31-32: WARNING: Use ARRAY_SIZE
tools/lib/bpf/xsk.c:484:34-35: WARNING: Use ARRAY_SIZE
tools/lib/bpf/xsk.c:485:35-36: WARNING: Use ARRAY_SIZE
It has been tested with gcc (Debian 8.3.0-6) 8.3.0 on x86_64.
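The pattern behind the warning, as a sketch (ARRAY_SIZE shown with its
usual tools-headers definition):
```
#include <stdio.h>

#define ARRAY_SIZE(arr) (sizeof(arr) / sizeof((arr)[0]))

int main(void)
{
	const int vals[] = { 1, 2, 3, 4 };

	/* Before: sizeof(vals) / sizeof(vals[0]) written out inline.
	 * After: ARRAY_SIZE(vals) says the same thing readably. */
	printf("%zu entries\n", ARRAY_SIZE(vals));
	return 0;
}
```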
Signed-off-by: Guo Zhengkui <guozhengkui@vivo.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20220306023426.19324-1-guozhengkui@vivo.com
Yuntao Wang [Fri, 4 Mar 2022 07:04:08 +0000 (15:04 +0800)]
bpf: Replace strncpy() with strscpy()
Using strncpy() on NUL-terminated strings is considered deprecated[1].
Moreover, if the length of 'task->comm' is less than the destination buffer
size, strncpy() will NUL-pad the destination buffer, which is a needless
performance penalty.
Replacing strncpy() with strscpy() fixes all these issues.
[1] https://www.kernel.org/doc/html/latest/process/deprecated.html#strncpy-on-nul-terminated-strings
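The replacement pattern, as a kernel-code sketch (the buffer name is
illustrative):
```
#include <linux/sched.h>	/* TASK_COMM_LEN, current */
#include <linux/string.h>	/* strscpy() */

static void copy_task_comm(char *buf)
{
	/* strscpy() copies at most TASK_COMM_LEN - 1 bytes, always
	 * NUL-terminates, does not NUL-pad the rest of the buffer, and
	 * returns the copied length or -E2BIG on truncation. */
	strscpy(buf, current->comm, TASK_COMM_LEN);
}
```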
Signed-off-by: Yuntao Wang <ytcoode@gmail.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: Yonghong Song <yhs@fb.com>
Link: https://lore.kernel.org/bpf/20220304070408.233658-1-ytcoode@gmail.com
lic121 [Tue, 1 Mar 2022 13:26:23 +0000 (13:26 +0000)]
libbpf: Unmap rings when umem deleted
xsk_umem__create() does mmap for fill/comp rings, but xsk_umem__delete()
doesn't do the unmap. This works fine for regular cases, because
xsk_socket__delete() does unmap for the rings. But for the case that
xsk_socket__create_shared() fails, umem rings are not unmapped.
fill_save/comp_save are checked to determine if the rings have already been
unmapped by xsk. If fill_save and comp_save are NULL, it means that the
rings have already been used by xsk. Then they are supposed to be
unmapped by xsk_socket__delete(). Otherwise, xsk_umem__delete() does the
unmap.
Fixes: 2f6324a3937f ("libbpf: Support shared umems between queues and devices")
Signed-off-by: Cheng Li <lic121@chinatelecom.cn>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20220301132623.GA19995@vscode.7~
Alexei Starovoitov [Sun, 6 Mar 2022 02:38:15 +0000 (18:38 -0800)]
Merge branch 'bpf: add __percpu tagging in vmlinux BTF'
Hao Luo says:
====================
This patchset is very much similar to Yonghong's patchset on adding
__user tagging [1], where a "user" btf_type_tag was introduced to
describe __user memory pointers. A similar approach can be applied to
__percpu pointers. The __percpu attribute in the kernel is used to identify
pointers that point to memory allocated in the percpu region. Normally,
accessing __percpu memory requires using special functions like
per_cpu_ptr() etc. Directly accessing a __percpu pointer is meaningless.
Currently vmlinux BTF does not have a way to differentiate a __percpu
pointer from a regular pointer. So BPF programs are allowed to load
__percpu memory directly, which is an incorrect behavior.
With the previous work that encodes __user information in BTF, a nice
framework has been set up to allow us to encode __percpu information in
BTF and let the verifier reject programs that try to directly access
percpu pointers. Previously, there was a PTR_TO_PERCPU_BTF_ID reg type which
was used to represent those percpu static variables in the kernel. Pahole
is able to collect variables that are stored in ".data..percpu" section
in the kernel image and emit BTF information for those variables. The
bpf_per_cpu_ptr() and bpf_this_cpu_ptr() helper functions were added to
access these variables. Now with __percpu information, we can tag those
__percpu fields in a struct (such as cgroup->rstat_cpu) and allow the
pair of bpf percpu helpers to access them as well.
In addition to adding __percpu tagging, this patchset also fixes a
harmless bug in the previous patch that introduced __user. Patch 01/04
is for that. Patch 02/04 adds the new attribute "percpu". Patch 03/04
adds MEM_PERCPU tag for PTR_TO_BTF_ID and replaces PTR_TO_PERCPU_BTF_ID
with (BTF_ID | MEM_PERCPU). Patch 04/04 refactors the btf_tag test a bit
and adds tests for percpu tag.
Like [1], the minimal requirements for btf_type_tag is
clang (>= clang14) and pahole (>= 1.23).
[1] https://lore.kernel.org/bpf/20211220015110.3rqxk5qwub3pa2gh@ast-mbp.dhcp.thefacebook.com/t/
====================
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Hao Luo [Fri, 4 Mar 2022 19:16:57 +0000 (11:16 -0800)]
selftests/bpf: Add a test for btf_type_tag "percpu"
Add test for percpu btf_type_tag. Similar to the "user" tag, we test
the following cases:
1. __percpu struct field.
2. __percpu as function parameter.
3. per_cpu_ptr() accepts dynamically allocated __percpu memory.
Because the test for "user" and the test for "percpu" are very similar,
a little bit of refactoring has been done in btf_tag.c. Basically, both
tests share the same function for loading vmlinux and module btf.
Example output from log:
> ./test_progs -v -t btf_tag
libbpf: prog 'test_percpu1': BPF program load failed: Permission denied
libbpf: prog 'test_percpu1': -- BEGIN PROG LOAD LOG --
...
; g = arg->a;
1: (61) r1 = *(u32 *)(r1 +0)
R1 is ptr_bpf_testmod_btf_type_tag_1 access percpu memory: off=0
...
test_btf_type_tag_mod_percpu:PASS:btf_type_tag_percpu 0 nsec
#26/6 btf_tag/btf_type_tag_percpu_mod1:OK
libbpf: prog 'test_percpu2': BPF program load failed: Permission denied
libbpf: prog 'test_percpu2': -- BEGIN PROG LOAD LOG --
...
; g = arg->p->a;
2: (61) r1 = *(u32 *)(r1 +0)
R1 is ptr_bpf_testmod_btf_type_tag_1 access percpu memory: off=0
...
test_btf_type_tag_mod_percpu:PASS:btf_type_tag_percpu 0 nsec
#26/7 btf_tag/btf_type_tag_percpu_mod2:OK
libbpf: prog 'test_percpu_load': BPF program load failed: Permission denied
libbpf: prog 'test_percpu_load': -- BEGIN PROG LOAD LOG --
...
; g = (__u64)cgrp->rstat_cpu->updated_children;
2: (79) r1 = *(u64 *)(r1 +48)
R1 is ptr_cgroup_rstat_cpu access percpu memory: off=48
...
test_btf_type_tag_vmlinux_percpu:PASS:btf_type_tag_percpu_load 0 nsec
#26/8 btf_tag/btf_type_tag_percpu_vmlinux_load:OK
load_btfs:PASS:could not load vmlinux BTF 0 nsec
test_btf_type_tag_vmlinux_percpu:PASS:btf_type_tag_percpu 0 nsec
test_btf_type_tag_vmlinux_percpu:PASS:btf_type_tag_percpu_helper 0 nsec
#26/9 btf_tag/btf_type_tag_percpu_vmlinux_helper:OK
Signed-off-by: Hao Luo <haoluo@google.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Yonghong Song <yhs@fb.com>
Link: https://lore.kernel.org/bpf/20220304191657.981240-5-haoluo@google.com
Hao Luo [Fri, 4 Mar 2022 19:16:56 +0000 (11:16 -0800)]
bpf: Reject programs that try to load __percpu memory.
With the introduction of the btf_type_tag "percpu", we can add a
MEM_PERCPU to identify those pointers that point to percpu memory.
The ability to differentiate percpu pointers from regular memory
pointers has two benefits:
1. It forbids unexpected use of percpu pointers, such as direct loads.
In kernel, there are special functions used for accessing percpu
memory. Directly loading percpu memory is meaningless. We already
have BPF helpers like bpf_per_cpu_ptr() and bpf_this_cpu_ptr() that
wrap the kernel percpu functions. So we can now convert percpu
pointers into regular pointers in a safe way.
2. Previously, bpf_per_cpu_ptr() and bpf_this_cpu_ptr() only work on
PTR_TO_PERCPU_BTF_ID, a special reg_type which describes static
percpu variables in kernel (we rely on pahole to encode them into
vmlinux BTF). Now, since we can identify __percpu tagged pointers,
we can also identify dynamically allocated percpu memory as well.
It means we can use bpf_xxx_cpu_ptr() on dynamic percpu memory.
This would be very convenient when accessing fields like
"cgroup->rstat_cpu".
Signed-off-by: Hao Luo <haoluo@google.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Yonghong Song <yhs@fb.com>
Link: https://lore.kernel.org/bpf/20220304191657.981240-4-haoluo@google.com
Hao Luo [Fri, 4 Mar 2022 19:16:55 +0000 (11:16 -0800)]
compiler_types: Define __percpu as __attribute__((btf_type_tag("percpu")))
This is similar to commit 7472d5a642c9 ("compiler_types: define __user as
__attribute__((btf_type_tag("user")))"), where a type tag "user" was
introduced to identify the pointers that point to user memory. With that
change, the newest compile toolchain can encode __user information into
vmlinux BTF, which can be used by the BPF verifier to enforce safe
program behaviors.
Similarly, we have the __percpu attribute, which is mainly used to indicate
memory is allocated in the percpu region. The __percpu pointers in the kernel
are supposed to be used together with functions like per_cpu_ptr() and
this_cpu_ptr(), which perform necessary calculation on the pointer's
base address. Without the btf_type_tag introduced in this patch,
__percpu pointers will be treated as regular memory pointers in vmlinux
BTF and BPF programs are allowed to directly dereference them, generating
incorrect behaviors. Now with "percpu" btf_type_tag, the BPF verifier is
able to differentiate __percpu pointers from regular pointers and forbids
unexpected behaviors like direct load.
The following is an example similar to the one given in commit
7472d5a642c9:
[$ ~] cat test.c
#define __percpu __attribute__((btf_type_tag("percpu")))
int foo(int __percpu *arg) {
    return *arg;
}
[$ ~] clang -O2 -g -c test.c
[$ ~] pahole -JV test.o
...
File test.o:
[1] INT int size=4 nr_bits=32 encoding=SIGNED
[2] TYPE_TAG percpu type_id=1
[3] PTR (anon) type_id=2
[4] FUNC_PROTO (anon) return=1 args=(3 arg)
[5] FUNC foo type_id=4
[$ ~]
For the function argument "int __percpu *arg", its type is described as
PTR -> TYPE_TAG(percpu) -> INT
The kernel can use this information for bpf verification or other
use cases.
Like commit 7472d5a642c9, this feature requires clang (>= clang14) and
pahole (>= 1.23).
Signed-off-by: Hao Luo <haoluo@google.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Yonghong Song <yhs@fb.com>
Link: https://lore.kernel.org/bpf/20220304191657.981240-3-haoluo@google.com