Ilya Leoshkevich [Wed, 10 Feb 2021 20:45:02 +0000 (21:45 +0100)]
bpf: Fix subreg optimization for BPF_FETCH
All 32-bit variants of BPF_FETCH (add, and, or, xor, xchg, cmpxchg)
define a 32-bit subreg and thus have zext_dst set. Their encoding,
however, uses the dst_reg field as a base register, which causes
opt_subreg_zext_lo32_rnd_hi32() to zero-extend said base register
instead of the one the insn really defines (r0 or src_reg).
Fix by properly choosing the register being defined, similar to how
check_atomic() already does that.
Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20210210204502.83429-1-iii@linux.ibm.com
Ilya Leoshkevich [Wed, 10 Feb 2021 14:28:52 +0000 (15:28 +0100)]
docs: bpf: Clarify BPF_CMPXCHG wording
Based on [1], BPF_CMPXCHG should always load the old value into R0. The
phrasing in bpf.rst is somewhat ambiguous in this regard; improve it to
make this aspect crystal clear.
[1] https://lore.kernel.org/bpf/CAADnVQJFcFwxEz=wnV=hkie-EDwa8s5JGbBQeFt1TGux1OihJw@mail.gmail.com/
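For illustration only (not part of this patch), a minimal BPF C sketch of the
documented semantics. Assuming clang 12+ with -mcpu=v3, the
__sync_val_compare_and_swap() builtin is lowered to BPF_CMPXCHG, and the value
it returns is the old value of the memory location whether or not the exchange
happened:

	#include <linux/bpf.h>
	#include <bpf/bpf_helpers.h>

	__u64 val = 3;
	__u64 old_seen;

	SEC("tp/syscalls/sys_enter_getpid")
	int cmpxchg_semantics(void *ctx)
	{
		/* Lowered to BPF_ATOMIC | BPF_CMPXCHG; R0 (the return value)
		 * always receives the old value of 'val', regardless of
		 * whether the store to 'val' actually happened. */
		old_seen = __sync_val_compare_and_swap(&val, 3, 42);
		return 0;
	}

	char _license[] SEC("license") = "GPL";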
Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20210210142853.82203-1-iii@linux.ibm.com
Alexei Starovoitov [Fri, 12 Feb 2021 03:35:00 +0000 (19:35 -0800)]
bpf: Clear per_cpu pointers during bpf_prog_realloc
bpf_prog_realloc copies the contents of struct bpf_prog.
The per-cpu pointers have to be cleared before freeing the old struct.
Reported-by: Ilya Leoshkevich <iii@linux.ibm.com>
Fixes: 700d4796ef59 ("bpf: Optimize program stats")
Fixes: ca06f55b9002 ("bpf: Add per-program recursion prevention mechanism")
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Florent Revest [Wed, 10 Feb 2021 11:14:06 +0000 (12:14 +0100)]
selftests/bpf: Add a selftest for the tracing bpf_get_socket_cookie
This builds up on the existing socket cookie test which checks whether
the bpf_get_socket_cookie helpers provide the same value in
cgroup/connect6 and sockops programs for a socket created by the
userspace part of the test.
Instead of having an update_cookie sockops program tag a socket local
storage with 0xFF, this uses both an update_cookie_sockops program and
an update_cookie_tracing program which successively tag the socket with
0x0F and then 0xF0.
Signed-off-by: Florent Revest <revest@chromium.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: KP Singh <kpsingh@kernel.org>
Link: https://lore.kernel.org/bpf/20210210111406.785541-5-revest@chromium.org
Florent Revest [Wed, 10 Feb 2021 11:14:05 +0000 (12:14 +0100)]
selftests/bpf: Use vmlinux.h in socket_cookie_prog.c
When migrating from bpf.h's to vmlinux.h's definition of struct
bpf_sock, an interesting LLVM behavior surfaced: LLVM started producing
two fetches of ctx->sk in the sockops program, which means that the
verifier could not keep track of the NULL check on ctx->sk. Therefore,
we need to extract ctx->sk into a variable before checking and
dereferencing it.
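A hedged sketch of the resulting pattern (illustrative names, not the exact
selftest code):

	#include <linux/bpf.h>
	#include <bpf/bpf_helpers.h>

	SEC("sockops")
	int update_cookie_sockops(struct bpf_sock_ops *ctx)
	{
		/* Read ctx->sk exactly once so the verifier can tie the
		 * NULL check below to every later dereference. */
		struct bpf_sock *sk = ctx->sk;

		if (!sk)
			return 1;
		/* ... dereference sk here, e.g. pass it to a helper ... */
		return 1;
	}

	char _license[] SEC("license") = "GPL";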
Signed-off-by: Florent Revest <revest@chromium.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: KP Singh <kpsingh@kernel.org>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20210210111406.785541-4-revest@chromium.org
Florent Revest [Wed, 10 Feb 2021 11:14:04 +0000 (12:14 +0100)]
selftests/bpf: Integrate the socket_cookie test to test_progs
Currently, the selftest for the BPF socket_cookie helpers is built and
run independently from test_progs. It's easy to forget and hard to
maintain.
This patch moves the socket cookies test into prog_tests/ and vastly
simplifies its logic by:
- rewriting the loading code with BPF skeletons
- rewriting the server/client code with network helpers
- rewriting the cgroup code with test__join_cgroup
- rewriting the error handling code with CHECKs
Signed-off-by: Florent Revest <revest@chromium.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: KP Singh <kpsingh@kernel.org>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20210210111406.785541-3-revest@chromium.org
Florent Revest [Wed, 10 Feb 2021 11:14:03 +0000 (12:14 +0100)]
bpf: Expose bpf_get_socket_cookie to tracing programs
This needs a new helper that:
- can work in a sleepable context (using sock_gen_cookie)
- takes a struct sock pointer and checks that it's not NULL
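A hedged sketch of how the helper can be used from a tracing program (assumes a
generated vmlinux.h; the fentry attach point is an illustrative choice):

	#include "vmlinux.h"
	#include <bpf/bpf_helpers.h>
	#include <bpf/bpf_tracing.h>

	SEC("fentry/tcp_close")
	int BPF_PROG(trace_close, struct sock *sk)
	{
		/* The tracing variant takes a struct sock pointer and
		 * checks it for NULL internally. */
		__u64 cookie = bpf_get_socket_cookie(sk);

		bpf_printk("socket cookie %llu", cookie);
		return 0;
	}

	char _license[] SEC("license") = "GPL";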
Signed-off-by: Florent Revest <revest@chromium.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: KP Singh <kpsingh@kernel.org>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20210210111406.785541-2-revest@chromium.org
Florent Revest [Wed, 10 Feb 2021 11:14:02 +0000 (12:14 +0100)]
bpf: Be less specific about socket cookies guarantees
Since "
92acdc58ab11 bpf, net: Rework cookie generator as per-cpu one"
socket cookies are not guaranteed to be non-decreasing. The
bpf_get_socket_cookie helper descriptions are currently specifying that
cookies are non-decreasing but we don't want users to rely on that.
Reported-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: Florent Revest <revest@chromium.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: KP Singh <kpsingh@kernel.org>
Link: https://lore.kernel.org/bpf/20210210111406.785541-1-revest@chromium.org
Jiri Olsa [Thu, 11 Feb 2021 12:40:04 +0000 (13:40 +0100)]
kbuild: Do not clean resolve_btfids if the output does not exist
Nathan reported an issue with cleaning an empty build directory:
$ make -s O=build distclean
../../scripts/Makefile.include:4: *** \
O=/ho...build/tools/bpf/resolve_btfids does not exist. Stop.
The problem is that the tools scripts require an existing output
directory, otherwise they fail.
Add a check around the resolve_btfids clean target to
ensure the output directory is in place.
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Tested-by: Nathan Chancellor <nathan@kernel.org>
Link: https://lore.kernel.org/bpf/20210211124004.1144344-1-jolsa@kernel.org
Alexei Starovoitov [Wed, 10 Feb 2021 03:36:34 +0000 (19:36 -0800)]
selftests/bpf: Add a test for map-in-map and per-cpu maps in sleepable progs
Add a basic test for map-in-map and per-cpu maps in sleepable programs.
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: KP Singh <kpsingh@kernel.org>
Link: https://lore.kernel.org/bpf/20210210033634.62081-10-alexei.starovoitov@gmail.com
Alexei Starovoitov [Wed, 10 Feb 2021 03:36:33 +0000 (19:36 -0800)]
bpf: Allows per-cpu maps and map-in-map in sleepable programs
Since sleepable programs now execute under migrate_disable,
the per-cpu maps are safe to use.
The map-in-map types were already OK to use in sleepable programs from
the time sleepable progs were introduced.
Note that non-preallocated maps are still not safe, since there is
no rcu_read_lock yet in sleepable programs and dynamically allocated
map elements rely on RCU protection. Sleepable programs
have rcu_read_lock_trace instead. That limitation will be addressed
in the future.
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: KP Singh <kpsingh@kernel.org>
Link: https://lore.kernel.org/bpf/20210210033634.62081-9-alexei.starovoitov@gmail.com
Alexei Starovoitov [Wed, 10 Feb 2021 03:36:32 +0000 (19:36 -0800)]
selftests/bpf: Improve recursion selftest
Since the recursion_misses counter is available in bpf_prog_info,
improve the selftest to make sure it's counting correctly.
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20210210033634.62081-8-alexei.starovoitov@gmail.com
Alexei Starovoitov [Wed, 10 Feb 2021 03:36:31 +0000 (19:36 -0800)]
bpf: Count the number of times recursion was prevented
Add a per-program counter for the number of times the recursion prevention
mechanism was triggered and expose it via show_fdinfo and bpf_prog_info.
Teach bpftool to print it.
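A hedged userspace sketch (not part of the patch) of reading the new counter
through bpf_obj_get_info_by_fd(); the command-line program ID and the build
command (cc -o prog_misses prog_misses.c -lbpf) are assumptions:

	#include <bpf/bpf.h>
	#include <linux/bpf.h>
	#include <stdio.h>
	#include <stdlib.h>
	#include <string.h>

	int main(int argc, char **argv)
	{
		struct bpf_prog_info info;
		__u32 len = sizeof(info);
		int fd;

		if (argc < 2)
			return 1;
		memset(&info, 0, sizeof(info));
		/* prog ID as shown by `bpftool prog` */
		fd = bpf_prog_get_fd_by_id(atoi(argv[1]));
		if (fd < 0 || bpf_obj_get_info_by_fd(fd, &info, &len))
			return 1;
		printf("recursion_misses: %llu\n",
		       (unsigned long long)info.recursion_misses);
		return 0;
	}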
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20210210033634.62081-7-alexei.starovoitov@gmail.com
Alexei Starovoitov [Wed, 10 Feb 2021 03:36:30 +0000 (19:36 -0800)]
selftest/bpf: Add a recursion test
Add a recursive non-sleepable fentry program as a test.
All attach points where sleepable progs can execute are non-recursive so far.
The recursion protection mechanism for sleepable programs cannot be activated yet.
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20210210033634.62081-6-alexei.starovoitov@gmail.com
Alexei Starovoitov [Wed, 10 Feb 2021 03:36:29 +0000 (19:36 -0800)]
bpf: Add per-program recursion prevention mechanism
Since both sleepable and non-sleepable programs execute under migrate_disable,
add a recursion prevention mechanism to both types of programs when they're
executed via the BPF trampoline.
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20210210033634.62081-5-alexei.starovoitov@gmail.com
Alexei Starovoitov [Wed, 10 Feb 2021 03:36:28 +0000 (19:36 -0800)]
bpf: Compute program stats for sleepable programs
Since sleepable programs don't migrate from the cpu, the execution stats can be
computed for them as well. Reuse the same infrastructure for both sleepable and
non-sleepable programs.
run_cnt -> the number of times the program was executed.
run_time_ns -> the program execution time in nanoseconds including the
off-cpu time when the program was sleeping.
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: KP Singh <kpsingh@kernel.org>
Link: https://lore.kernel.org/bpf/20210210033634.62081-4-alexei.starovoitov@gmail.com
Alexei Starovoitov [Wed, 10 Feb 2021 03:36:27 +0000 (19:36 -0800)]
bpf: Run sleepable programs with migration disabled
In older non-RT kernels migrate_disable() was the same as preempt_disable().
Since commit 74d862b682f5 ("sched: Make migrate_disable/enable() independent of RT"),
migrate_disable() is real and doesn't prevent sleeping.
Running sleepable programs with migration disabled makes it possible to add
support for program stats and per-cpu maps later.
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: KP Singh <kpsingh@kernel.org>
Link: https://lore.kernel.org/bpf/20210210033634.62081-3-alexei.starovoitov@gmail.com
Alexei Starovoitov [Wed, 10 Feb 2021 03:36:26 +0000 (19:36 -0800)]
bpf: Optimize program stats
Move bpf_prog_stats from prog->aux into prog to avoid one extra load
in critical path of program execution.
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20210210033634.62081-2-alexei.starovoitov@gmail.com
Marco Elver [Tue, 9 Feb 2021 11:27:01 +0000 (12:27 +0100)]
bpf_lru_list: Read double-checked variable once without lock
For double-checked locking in bpf_common_lru_push_free(), node->type is
read outside the critical section and then re-checked under the lock.
However, concurrent writes to node->type result in data races.
For example, the following concurrent access was observed by KCSAN:
write to 0xffff88801521bc22 of 1 bytes by task 10038 on cpu 1:
__bpf_lru_node_move_in kernel/bpf/bpf_lru_list.c:91
__local_list_flush kernel/bpf/bpf_lru_list.c:298
...
read to 0xffff88801521bc22 of 1 bytes by task 10043 on cpu 0:
bpf_common_lru_push_free kernel/bpf/bpf_lru_list.c:507
bpf_lru_push_free kernel/bpf/bpf_lru_list.c:555
...
Fix the data races where node->type is read outside the critical section
(for double-checked locking) by marking the access with READ_ONCE() as
well as ensuring the variable is only accessed once.
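A self-contained userspace analogue of the pattern (C11 atomics stand in for
the kernel's READ_ONCE(); names and lists are illustrative, this is not the
bpf_lru_list code):

	#include <pthread.h>
	#include <stdatomic.h>
	#include <stdio.h>

	static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
	static _Atomic int node_type = 1;	/* 1: local list, 0: global list */

	static void push_free(void)
	{
		/* Read the racy variable exactly once outside the lock... */
		int type = atomic_load_explicit(&node_type, memory_order_relaxed);

		if (type == 1) {
			pthread_mutex_lock(&lock);
			/* ...and re-check under the lock before acting on it. */
			if (atomic_load_explicit(&node_type, memory_order_relaxed) == 1)
				puts("fast path: free to local list");
			pthread_mutex_unlock(&lock);
			return;
		}
		puts("slow path: free to global list");
	}

	int main(void)
	{
		push_free();
		return 0;
	}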
Fixes: 3a08c2fd7634 ("bpf: LRU List")
Reported-by: syzbot+3536db46dfa58c573458@syzkaller.appspotmail.com
Reported-by: syzbot+516acdb03d3e27d91bcd@syzkaller.appspotmail.com
Signed-off-by: Marco Elver <elver@google.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: Martin KaFai Lau <kafai@fb.com>
Link: https://lore.kernel.org/bpf/20210209112701.3341724-1-elver@google.com
Jiapeng Chong [Tue, 9 Feb 2021 08:46:38 +0000 (16:46 +0800)]
selftests/bpf: Simplify the calculation of variables
Fix the following coccicheck warnings:
./tools/testing/selftests/bpf/xdpxceiver.c:954:28-30: WARNING !A || A && B is equivalent to !A || B.
./tools/testing/selftests/bpf/xdpxceiver.c:932:28-30: WARNING !A || A && B is equivalent to !A || B.
./tools/testing/selftests/bpf/xdpxceiver.c:909:28-30: WARNING !A || A && B is equivalent to !A || B.
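A tiny illustration of the rewrite (hypothetical variable names, not the
xdpxceiver code), showing the two forms are logically equivalent:

	#include <stdbool.h>
	#include <stdio.h>

	int main(void)
	{
		bool pkt_ok = false, seen_all = true;

		bool before = !pkt_ok || (pkt_ok && seen_all);	/* original form */
		bool after  = !pkt_ok || seen_all;		/* simplified form */

		printf("%d %d\n", before, after);	/* always prints the same values */
		return 0;
	}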
Reported-by: Abaci Robot <abaci@linux.alibaba.com>
Signed-off-by: Jiapeng Chong <jiapeng.chong@linux.alibaba.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/1612860398-102839-1-git-send-email-jiapeng.chong@linux.alibaba.com
Ilya Leoshkevich [Wed, 10 Feb 2021 02:07:13 +0000 (03:07 +0100)]
selftests/bpf: Fix endianness issues in atomic tests
Atomic tests store a DW, but then load it back as a W from the same
address. This doesn't work on big-endian systems, and since the point
of those tests is not testing narrow loads, fix simply by loading a
DW.
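A userspace illustration of the underlying issue (hypothetical example, not the
test itself):

	#include <stdint.h>
	#include <stdio.h>
	#include <string.h>

	int main(void)
	{
		uint64_t dw = 0x1122334455667788ULL;	/* stored as a DW */
		uint32_t w;

		/* A W load from the same address sees the low half
		 * (0x55667788) only on little-endian; on big-endian it
		 * sees 0x11223344, which is why the test must load a DW. */
		memcpy(&w, &dw, sizeof(w));
		printf("W view of the DW: 0x%08x\n", w);
		return 0;
	}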
Fixes: 98d666d05a1d ("bpf: Add tests for new BPF atomic operations")
Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20210210020713.77911-1-iii@linux.ibm.com
Alexei Starovoitov [Wed, 10 Feb 2021 18:44:19 +0000 (10:44 -0800)]
Merge branch 'allow variable-offset stack acces'
Andrei Matei says:
====================
Before this patch, variable offset access to the stack was disallowed
for regular instructions, but was allowed for "indirect" accesses (i.e.
helpers). This patch removes the restriction, allowing reading and
writing to the stack through stack pointers with variable offsets. This
makes stack-allocated buffers more usable in programs, and brings stack
pointers closer to other types of pointers.
The motivation is being able to use stack-allocated buffers for data
manipulation. When the stack size limit is sufficient, allocating
buffers on the stack is simpler than per-cpu arrays, or other
alternatives.
V2 -> V3
- var-offset writes mark all the stack slots in range as initialized, so
that future reads are not rejected.
- rewrote the C test to not use uprobes, as per Andrii's suggestion.
- addressed other review comments from Alexei.
V1 -> V2
- add support for var-offset stack writes, in addition to reads
- add a C test
- made variable offset direct reads no longer destroy spilled registers
in the access range
- address review nits
====================
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Andrei Matei [Sun, 7 Feb 2021 01:10:27 +0000 (20:10 -0500)]
selftest/bpf: Add test for var-offset stack access
Add a higher-level test (C BPF program) for the new functionality -
variable access stack reads and writes.
Signed-off-by: Andrei Matei <andreimatei1@gmail.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20210207011027.676572-5-andreimatei1@gmail.com
Andrei Matei [Sun, 7 Feb 2021 01:10:26 +0000 (20:10 -0500)]
selftest/bpf: Verifier tests for var-off access
Add tests for the new functionality - reading and writing to the stack
through a variable-offset pointer.
Signed-off-by: Andrei Matei <andreimatei1@gmail.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20210207011027.676572-4-andreimatei1@gmail.com
Andrei Matei [Sun, 7 Feb 2021 01:10:25 +0000 (20:10 -0500)]
selftest/bpf: Adjust expected verifier errors
The verifier errors around stack accesses have changed slightly in the
previous commit (generally for the better).
Signed-off-by: Andrei Matei <andreimatei1@gmail.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20210207011027.676572-3-andreimatei1@gmail.com
Andrei Matei [Sun, 7 Feb 2021 01:10:24 +0000 (20:10 -0500)]
bpf: Allow variable-offset stack access
Before this patch, variable offset access to the stack was disallowed
for regular instructions, but was allowed for "indirect" accesses (i.e.
helpers). This patch removes the restriction, allowing reading and
writing to the stack through stack pointers with variable offsets. This
makes stack-allocated buffers more usable in programs, and brings stack
pointers closer to other types of pointers.
The motivation is being able to use stack-allocated buffers for data
manipulation. When the stack size limit is sufficient, allocating
buffers on the stack is simpler than per-cpu arrays, or other
alternatives.
In unprivileged programs, variable-offset reads and writes are
disallowed (they were already disallowed for the indirect access case)
because the speculative execution checking code doesn't support them.
Additionally, when writing through a variable-offset stack pointer, if
any pointers are in the accessible range, there's a possibility of later
leaking pointers because the write cannot be tracked precisely.
Writes with variable offset mark the whole range as initialized, even
though we don't know which stack slots are actually written. This is in
order to not reject future reads to these slots. Note that this doesn't
affect writes done through helpers; like before, helpers need the whole
stack range to be initialized to begin with.
All the stack slots in range are considered scalars after the write;
variable-offset register spills are not tracked.
For reads, all the stack slots in the variable range need to be
initialized (but see above about what writes do), otherwise the read is
rejected. All registers spilled in stack slots that might be read are
marked as having been read; however, reads through such pointers don't do
register filling; the target register will always be either a scalar or
a constant zero.
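A hedged BPF C sketch of what becomes acceptable with this patch (illustrative
only, not the selftest code):

	#include <linux/bpf.h>
	#include <bpf/bpf_helpers.h>

	SEC("tp/syscalls/sys_enter_getpid")
	int varoff_stack(void *ctx)
	{
		char buf[16] = {};			 /* whole range initialized */
		__u32 idx = bpf_get_prandom_u32() & 0xf; /* unknown but bounded scalar */

		/* Direct read through a stack pointer with a variable offset;
		 * accepted as long as the range is in bounds and initialized. */
		return buf[idx];
	}

	char _license[] SEC("license") = "GPL";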
Signed-off-by: Andrei Matei <andreimatei1@gmail.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20210207011027.676572-2-andreimatei1@gmail.com
Andrii Nakryiko [Tue, 9 Feb 2021 05:21:39 +0000 (21:21 -0800)]
Merge branch 'kbuild/resolve_btfids: Invoke resolve_btfids'
Jiri Olsa says:
====================
hi,
resolve_btfids tool is used during the kernel build,
so we should clean it on kernel's make clean.
v2 changes:
- add Song's acks on patches 1 and 4 (others changed) [Song]
- add missing / [Andrii]
- change srctree variable initialization [Andrii]
- shifted ifdef for clean target [Andrii]
thanks,
jirka
====================
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Jiri Olsa [Fri, 5 Feb 2021 12:40:20 +0000 (13:40 +0100)]
kbuild: Add resolve_btfids clean to root clean target
The resolve_btfids tool is used during the kernel build,
so we should clean it on the kernel's make clean.
Invoke the resolve_btfids clean as part of the root
'make clean'.
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: Song Liu <songliubraving@fb.com>
Link: https://lore.kernel.org/bpf/20210205124020.683286-5-jolsa@kernel.org
Jiri Olsa [Fri, 5 Feb 2021 12:40:19 +0000 (13:40 +0100)]
tools/resolve_btfids: Set srctree variable unconditionally
We want this clean target to be called from the tree's root Makefile,
which defines the same srctree variable, and that would break
the make setup.
We actually do not use the srctree value passed from outside,
so we can solve this by setting the current srctree value
directly.
Also change the way srctree is initialized, as suggested
by Andrii.
The root Makefile also does not define the implicit RM variable,
so add an RM initialization.
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20210205124020.683286-4-jolsa@kernel.org
Jiri Olsa [Fri, 5 Feb 2021 12:40:18 +0000 (13:40 +0100)]
tools/resolve_btfids: Check objects before removing
We want this clean target to be called from the tree's root clean target,
and that one is silent if there's nothing to clean.
Add a check for all objects to clean and display CLEAN
messages only if there are objects to remove.
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20210205124020.683286-3-jolsa@kernel.org
Jiri Olsa [Fri, 5 Feb 2021 12:40:17 +0000 (13:40 +0100)]
tools/resolve_btfids: Build libbpf and libsubcmd in separate directories
Set up separate build directories for libbpf and libsubcmd,
so they are separated from other objects and we don't get them mixed
in the future.
It also simplifies cleaning, which is now a simple rm -rf.
Also there's no need for FEATURE-DUMP.libbpf and bpf_helper_defs.h
files in .gitignore anymore.
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: Song Liu <songliubraving@fb.com>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20210205124020.683286-2-jolsa@kernel.org
Jiapeng Chong [Mon, 8 Feb 2021 09:43:36 +0000 (17:43 +0800)]
bpf: Simplify bool comparison
Fix the following coccicheck warning:
./tools/bpf/bpf_dbg.c:893:32-36: WARNING: Comparison to bool.
Reported-by: Abaci Robot <abaci@linux.alibaba.com>
Signed-off-by: Jiapeng Chong <jiapeng.chong@linux.alibaba.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/1612777416-34339-1-git-send-email-jiapeng.chong@linux.alibaba.com
Brendan Jackman [Mon, 8 Feb 2021 12:37:37 +0000 (12:37 +0000)]
selftests/bpf: Add missing cleanup in atomic_bounds test
Add missing skeleton destroy call.
Fixes: 37086bfdc737 ("bpf: Propagate stack bounds to registers in atomics w/ BPF_FETCH")
Reported-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Brendan Jackman <jackmanb@google.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20210208123737.963172-1-jackmanb@google.com
Yang Li [Mon, 8 Feb 2021 10:30:13 +0000 (18:30 +0800)]
selftests/bpf: Remove unneeded semicolon
Eliminate the following coccicheck warning:
./tools/testing/selftests/bpf/test_flow_dissector.c:506:2-3: Unneeded semicolon
Reported-by: Abaci Robot <abaci@linux.alibaba.com>
Signed-off-by: Yang Li <yang.lee@linux.alibaba.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/1612780213-84583-1-git-send-email-yang.lee@linux.alibaba.com
Yang Li [Sun, 7 Feb 2021 07:52:40 +0000 (15:52 +0800)]
bpf/benchs/bench_ringbufs: Remove unneeded semicolon
Eliminate the following coccicheck warning:
./tools/testing/selftests/bpf/benchs/bench_ringbufs.c:322:2-3: Unneeded semicolon
Reported-by: Abaci Robot <abaci@linux.alibaba.com>
Signed-off-by: Yang Li <yang.lee@linux.alibaba.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/1612684360-115910-1-git-send-email-yang.lee@linux.alibaba.com
Yonghong Song [Thu, 4 Feb 2021 23:48:27 +0000 (15:48 -0800)]
bpf: Refactor BPF_PSEUDO_CALL checking as a helper function
There is no functionality change. This refactoring intends
to facilitate the next patch's change with BPF_PSEUDO_FUNC.
Signed-off-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20210204234827.1628953-1-yhs@fb.com
Andrii Nakryiko [Fri, 5 Feb 2021 00:33:56 +0000 (16:33 -0800)]
Merge branch 'BPF ring buffer + sleepable programs'
KP Singh says:
====================
- Use ring_buffer__consume without BPF_RB_FORCE_WAKEUP as suggested by
Andrii
- Use ASSERT_OK_PTR macro
Sleepable programs currently do not have access to any ringbuffer and
since the perf ring buffer is a per-cpu map, it would not be trivial to
enable for sleepable programs. Our specific use-case is to use the
bpf_ima_inode_hash helper and write the hash to a ring buffer from a
sleepable LSM hook.
This series allows the BPF ringbuffer to be used in sleepable programs
(tracing and lsm). Since the helper prototypes were already exposed,
the only change required was to have the verifier allow
BPF_MAP_TYPE_RINGBUF for sleepable programs. The ima test is also
modified to use the ringbuffer instead of global variables.
Based on discussions we had over the BPF office hours and enabling all
the possible debug options, I could not find any issues or warnings when
using the ring buffer from sleepable programs.
====================
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
KP Singh [Thu, 4 Feb 2021 19:36:22 +0000 (19:36 +0000)]
bpf/selftests: Update the IMA test to use BPF ring buffer
Instead of using shared global variables between userspace and BPF, use
the BPF ring buffer to send the IMA hash to userspace. This helps
in validating both IMA and the usage of the ringbuffer in sleepable
programs.
Signed-off-by: KP Singh <kpsingh@kernel.org>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20210204193622.3367275-3-kpsingh@kernel.org
KP Singh [Thu, 4 Feb 2021 19:36:21 +0000 (19:36 +0000)]
bpf: Allow usage of BPF ringbuffer in sleepable programs
The BPF ringbuffer map is pre-allocated and the implementation logic
does not rely on disabling preemption or per-cpu data structures. Using
the BPF ringbuffer in sleepable LSM and tracing programs does not trigger
any warnings with DEBUG_ATOMIC_SLEEP, DEBUG_PREEMPT,
PROVE_RCU, PROVE_LOCKING and LOCKDEP enabled.
This allows helpers like bpf_copy_from_user and bpf_ima_inode_hash to
write to the BPF ring buffer from sleepable BPF programs.
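A hedged sketch of a sleepable LSM program writing to a BPF ring buffer
(assumes a generated vmlinux.h; the hook, map and event layout are
illustrative, not the IMA selftest):

	#include "vmlinux.h"
	#include <bpf/bpf_helpers.h>
	#include <bpf/bpf_tracing.h>

	struct {
		__uint(type, BPF_MAP_TYPE_RINGBUF);
		__uint(max_entries, 4096);
	} events SEC(".maps");

	SEC("lsm.s/bprm_committed_creds")
	int BPF_PROG(record_exec, struct linux_binprm *bprm)
	{
		__u32 *pid = bpf_ringbuf_reserve(&events, sizeof(*pid), 0);

		if (!pid)
			return 0;
		*pid = bpf_get_current_pid_tgid() >> 32;
		bpf_ringbuf_submit(pid, 0);
		return 0;
	}

	char _license[] SEC("license") = "GPL";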
Signed-off-by: KP Singh <kpsingh@kernel.org>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20210204193622.3367275-2-kpsingh@kernel.org
Andrii Nakryiko [Fri, 5 Feb 2021 00:03:17 +0000 (16:03 -0800)]
Merge branch 'BPF selftest helper script'
KP Singh says:
====================
# v4 -> v5
- Use %Y (modification time) instead of %W (creation time) of the local
copy of the kernel config to check for newer upstream config.
- Rename the script to vmtest.sh
# v3 -> v4
- Fix logic for updating kernel config to not download the file
if there are no upstream modifications and avoid extraneous
kernel compilation as suggested by Andrii.
- This also removes the need for the -k flag.
# v2 -> v3
- Fixes to silence verbose commands
- Fixed output buffering without being teed out
- Fixed the clobbered error code of the script
- Other fixes suggested by Andrii
# v1 -> v2
- The script now compiles the kernel by default, and the -k option
implies "keep the kernel"
- Pointer to the script in the docs.
- Some minor simplifications.
Allow developers and contributors to understand if their changes would
end up breaking the BPF CI and avoid the back and forth required for
fixing the test cases in the CI environment. The se
====================
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
KP Singh [Thu, 4 Feb 2021 19:45:44 +0000 (19:45 +0000)]
bpf/selftests: Add a short note about vmtest.sh in README.rst
Add a short note to make contributors aware of the existence of the
script. The documentation intentionally does not document all the
options of the script, to avoid describing them in two places (they're
available in the usage / help message of the script).
Signed-off-by: KP Singh <kpsingh@kernel.org>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20210204194544.3383814-3-kpsingh@kernel.org
KP Singh [Thu, 4 Feb 2021 19:45:43 +0000 (19:45 +0000)]
bpf: Helper script for running BPF presubmit tests
The script runs the BPF selftests locally on the same kernel image
as they would run post submit in the BPF continuous integration
framework.
The goal of the script is to allow contributors to run selftests locally
in the same environment to check if their changes would end up breaking
the BPF CI and reduce the back-and-forth between the maintainers and the
developers.
Signed-off-by: KP Singh <kpsingh@kernel.org>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Tested-by: Jiri Olsa <jolsa@redhat.com>
Link: https://lore.kernel.org/bpf/20210204194544.3383814-2-kpsingh@kernel.org
Alexei Starovoitov [Tue, 2 Feb 2021 05:38:37 +0000 (21:38 -0800)]
bpf: Emit explicit NULL pointer checks for PROBE_LDX instructions.
PTR_TO_BTF_ID registers contain either a kernel pointer or NULL.
Emit the NULL check explicitly in the JIT instead of going into
do_user_addr_fault() on a NULL dereference.
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Song Liu <songliubraving@fb.com>
Link: https://lore.kernel.org/bpf/20210202053837.95909-1-alexei.starovoitov@gmail.com
Andrii Nakryiko [Wed, 3 Feb 2021 20:34:45 +0000 (12:34 -0800)]
libbpf: Stop using feature-detection Makefiles
Libbpf's Makefile relies on Linux tools infrastructure's feature detection
framework, but libbpf's needs are very modest: it detects the presence of
libelf and libz, both of which are mandatory. So it doesn't benefit much from
the framework, but pays significant costs in terms of maintainability and
debugging experience, when something goes wrong. The other feature detector,
testing for the presence of a minimal BPF API in system headers, is long
obsolete as well, providing no value.
So stop using feature detection and just assume the presence of libelf and
libz during build time. Worst case, the user will get a clear and actionable
linker error, e.g.:
/usr/bin/ld: cannot find -lelf
On the other hand, we completely bypass recurring issues various users
reported over time with false negatives of feature detection (libelf or libz
not being detected, while they are actually present in the system).
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Acked-by: Randy Dunlap <rdunlap@infradead.org>
Link: https://lore.kernel.org/bpf/20210203203445.3356114-1-andrii@kernel.org
Lorenzo Bianconi [Fri, 29 Jan 2021 22:04:08 +0000 (23:04 +0100)]
net, veth: Alloc skb in bulk for ndo_xdp_xmit
Split ndo_xdp_xmit and ndo_start_xmit use cases in veth_xdp_rcv routine
in order to alloc skbs in bulk for XDP_PASS verdict.
Introduce xdp_alloc_skb_bulk utility routine to alloc skb bulk list.
The proposed approach has been tested in the following scenario:
eth (ixgbe) --> XDP_REDIRECT --> veth0 --> (remote-ns) veth1 --> XDP_PASS
XDP_REDIRECT: xdp_redirect_map bpf sample
XDP_PASS: xdp_rxq_info bpf sample
traffic generator: pkt_gen sending udp traffic on a remote device
bpf-next master: ~3.64Mpps
bpf-next + skb bulking allocation: ~3.79Mpps
Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Reviewed-by: Toshiaki Makita <toshiaki.makita1@gmail.com>
Acked-by: Jesper Dangaard Brouer <brouer@redhat.com>
Link: https://lore.kernel.org/bpf/a14a30d3c06fff24e13f836c733d80efc0bd6eb5.1611957532.git.lorenzo@kernel.org
Andrei Matei [Sat, 30 Jan 2021 22:01:50 +0000 (17:01 -0500)]
selftest/bpf: Testing for multiple logs on REJECT
This patch adds support to verifier tests to check for a succession of
verifier log messages on program load failure. This makes the errstr
field work uniformly across REJECT and VERBOSE_ACCEPT checks.
This patch also increases the maximum size of a message in the series of
messages to test from 80 chars to 200 chars. This is in order to keep
existing tests working, which sometimes test for messages larger than 80
chars (which was accepted in the REJECT case, when testing for a single
message, but not in the VERBOSE_ACCEPT case, when testing for possibly
multiple messages).
An example of such a long, checked message is in bounds.c: "R1 has
unknown scalar with mixed signed bounds, pointer arithmetic with it
prohibited for !root"
Signed-off-by: Andrei Matei <andreimatei1@gmail.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Song Liu <songliubraving@fb.com>
Link: https://lore.kernel.org/bpf/20210130220150.59305-1-andreimatei1@gmail.com
Yang Li [Wed, 3 Feb 2021 03:17:28 +0000 (11:17 +0800)]
samples: bpf: Remove unneeded semicolon
Eliminate the following coccicheck warning:
./samples/bpf/cookie_uid_helper_example.c:316:3-4: Unneeded semicolon
Reported-by: Abaci Robot <abaci@linux.alibaba.com>
Signed-off-by: Yang Li <yang.lee@linux.alibaba.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/1612322248-35398-1-git-send-email-yang.lee@linux.alibaba.com
KP Singh [Tue, 2 Feb 2021 21:37:30 +0000 (21:37 +0000)]
selftests/bpf: Fix a compiler warning in local_storage test
Some compilers trigger a warning when tmp_dir_path is allocated
with a fixed size of 64 bytes and used in the following snprintf:
snprintf(tmp_exec_path, sizeof(tmp_exec_path), "%s/copy_of_rm",
tmp_dir_path);
warning: ‘/copy_of_rm’ directive output may be truncated writing 11
bytes into a region of size between 1 and 64 [-Wformat-truncation=]
This is because it assumes that tmp_dir_path can be a maximum of 64
bytes long and, therefore, the end result can get truncated. Fix it by
not using a fixed size in the initialization of tmp_dir_path, which
allows the compiler to track the actual size of the array better.
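A hedged illustration of the pattern (hypothetical paths and sizes, not the
selftest code):

	#include <stdio.h>

	int main(void)
	{
		/* before: char tmp_dir_path[64] = "/tmp/storageXXXXXX";
		 *         the compiler only knows strlen() <= 63, so appending
		 *         "/copy_of_rm" may not fit -> -Wformat-truncation
		 * after:  size deduced from the initializer, so the maximum
		 *         possible result is known to fit into tmp_exec_path */
		char tmp_dir_path[] = "/tmp/storageXXXXXX";
		char tmp_exec_path[64];

		snprintf(tmp_exec_path, sizeof(tmp_exec_path), "%s/copy_of_rm",
			 tmp_dir_path);
		printf("%s\n", tmp_exec_path);
		return 0;
	}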
Fixes: 2f94ac191846 ("bpf: Update local storage test to check handling of null ptrs")
Signed-off-by: KP Singh <kpsingh@kernel.org>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20210202213730.1906931-1-kpsingh@kernel.org
Brendan Jackman [Tue, 2 Feb 2021 13:50:02 +0000 (13:50 +0000)]
bpf: Propagate stack bounds to registers in atomics w/ BPF_FETCH
When BPF_FETCH is set, atomic instructions load a value from memory
into a register. The current verifier code first checks via
check_mem_access whether we can access the memory, and then checks
via check_reg_arg whether we can write into the register.
For loads, check_reg_arg has the side-effect of marking the
register's value as unknown, and check_mem_access has the side effect
of propagating bounds from memory to the register. This currently only
takes effect for stack memory.
Therefore with the current order, bounds information is thrown away,
but by simply reversing the order of check_reg_arg
vs. check_mem_access, we can instead propagate bounds smartly.
A simple test is added with an infinite loop that can only be proved
unreachable if this propagation is present. This is implemented both
with C and directly in test_verifier using assembly.
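A hedged BPF C sketch modeled on the idea described above (not the exact
selftest; assumes clang 12+ with -mcpu=v3 so the builtin becomes a BPF_FETCH
atomic):

	#include <linux/bpf.h>
	#include <bpf/bpf_helpers.h>

	SEC("tp/syscalls/sys_enter_getpid")
	int atomic_bounds_example(void *ctx)
	{
		int a = 0;
		/* BPF_ATOMIC | BPF_ADD | BPF_FETCH: b receives the old value of a. */
		int b = __sync_fetch_and_add(&a, 1);

		/* With bounds propagated from the stack slot, the verifier
		 * knows b == 0, so this loop is provably never entered. */
		while (b)
			continue;
		return 0;
	}

	char _license[] SEC("license") = "GPL";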
Suggested-by: John Fastabend <john.fastabend@gmail.com>
Signed-off-by: Brendan Jackman <jackmanb@google.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20210202135002.4024825-1-jackmanb@google.com
Tiezhu Yang [Tue, 26 Jan 2021 14:05:25 +0000 (22:05 +0800)]
samples/bpf: Add include dir for MIPS Loongson64 to fix build errors
There exists many build errors when make M=samples/bpf on the Loongson
platform. This issue is MIPS related, x86 compiles just fine.
Here are some errors:
CLANG-bpf samples/bpf/sockex2_kern.o
In file included from samples/bpf/sockex2_kern.c:2:
In file included from ./include/uapi/linux/in.h:24:
In file included from ./include/linux/socket.h:8:
In file included from ./include/linux/uio.h:8:
In file included from ./include/linux/kernel.h:11:
In file included from ./include/linux/bitops.h:32:
In file included from ./arch/mips/include/asm/bitops.h:19:
In file included from ./arch/mips/include/asm/barrier.h:11:
./arch/mips/include/asm/addrspace.h:13:10: fatal error: 'spaces.h' file not found
^~~~~~~~~~
1 error generated.
CLANG-bpf samples/bpf/sockex2_kern.o
In file included from samples/bpf/sockex2_kern.c:2:
In file included from ./include/uapi/linux/in.h:24:
In file included from ./include/linux/socket.h:8:
In file included from ./include/linux/uio.h:8:
In file included from ./include/linux/kernel.h:11:
In file included from ./include/linux/bitops.h:32:
In file included from ./arch/mips/include/asm/bitops.h:22:
In file included from ./arch/mips/include/asm/cpu-features.h:13:
In file included from ./arch/mips/include/asm/cpu-info.h:15:
In file included from ./include/linux/cache.h:6:
./arch/mips/include/asm/cache.h:12:10: fatal error: 'kmalloc.h' file not found
^~~~~~~~~~~
1 error generated.
CLANG-bpf samples/bpf/sockex2_kern.o
In file included from samples/bpf/sockex2_kern.c:2:
In file included from ./include/uapi/linux/in.h:24:
In file included from ./include/linux/socket.h:8:
In file included from ./include/linux/uio.h:8:
In file included from ./include/linux/kernel.h:11:
In file included from ./include/linux/bitops.h:32:
In file included from ./arch/mips/include/asm/bitops.h:22:
./arch/mips/include/asm/cpu-features.h:15:10: fatal error: 'cpu-feature-overrides.h' file not found
^~~~~~~~~~~~~~~~~~~~~~~~~
1 error generated.
$ find arch/mips/include/asm -name spaces.h | sort
arch/mips/include/asm/mach-ar7/spaces.h
...
arch/mips/include/asm/mach-generic/spaces.h
...
arch/mips/include/asm/mach-loongson64/spaces.h
...
arch/mips/include/asm/mach-tx49xx/spaces.h
$ find arch/mips/include/asm -name kmalloc.h | sort
arch/mips/include/asm/mach-generic/kmalloc.h
arch/mips/include/asm/mach-ip32/kmalloc.h
arch/mips/include/asm/mach-tx49xx/kmalloc.h
$ find arch/mips/include/asm -name cpu-feature-overrides.h | sort
arch/mips/include/asm/mach-ath25/cpu-feature-overrides.h
...
arch/mips/include/asm/mach-generic/cpu-feature-overrides.h
...
arch/mips/include/asm/mach-loongson64/cpu-feature-overrides.h
...
arch/mips/include/asm/mach-tx49xx/cpu-feature-overrides.h
In the arch/mips/Makefile, there exists the following board-dependent
options:
include arch/mips/Kbuild.platforms
cflags-y += -I$(srctree)/arch/mips/include/asm/mach-generic
So we can do similar things in samples/bpf/Makefile: just add the
platform-specific and generic include dirs for MIPS Loongson64 to
fix the build errors.
Signed-off-by: Tiezhu Yang <yangtiezhu@loongson.cn>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/1611669925-25315-1-git-send-email-yangtiezhu@loongson.cn
Tobias Klauser [Wed, 27 Jan 2021 17:46:15 +0000 (18:46 +0100)]
bpf: Simplify cases in bpf_base_func_proto
!perfmon_capable() is checked before the last switch(func_id) in
bpf_base_func_proto. Thus, the cases BPF_FUNC_trace_printk and
BPF_FUNC_snprintf_btf can be moved to that last switch(func_id) to omit
the inline !perfmon_capable() checks.
Signed-off-by: Tobias Klauser <tklauser@distanz.ch>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20210127174615.3038-1-tklauser@distanz.ch
Stanislav Fomichev [Wed, 27 Jan 2021 23:28:53 +0000 (15:28 -0800)]
bpf: Enable bpf_{g,s}etsockopt in BPF_CGROUP_UDP{4,6}_RECVMSG
Those hooks run as BPF_CGROUP_RUN_SA_PROG_LOCK and operate on a locked socket.
Note that we could remove the switch for prog->expected_attach_type altogether
since all current sock_addr attach types are covered. However, it makes sense
to keep it as a safeguard in case new sock_addr attach types are added that
might not operate on a locked socket. Therefore, avoid letting this slip through.
Signed-off-by: Stanislav Fomichev <sdf@google.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20210127232853.3753823-5-sdf@google.com
Stanislav Fomichev [Wed, 27 Jan 2021 23:28:52 +0000 (15:28 -0800)]
selftests/bpf: Rewrite recvmsg{4,6} asm progs to c in test_sock_addr
I'll extend them in the next patch. It's easier to work with C
than with asm.
Signed-off-by: Stanislav Fomichev <sdf@google.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20210127232853.3753823-4-sdf@google.com
Stanislav Fomichev [Wed, 27 Jan 2021 23:28:51 +0000 (15:28 -0800)]
bpf: Enable bpf_{g,s}etsockopt in BPF_CGROUP_INET{4,6}_GET{PEER,SOCK}NAME
Those hooks run as BPF_CGROUP_RUN_SA_PROG_LOCK and operate on
a locked socket.
Signed-off-by: Stanislav Fomichev <sdf@google.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20210127232853.3753823-3-sdf@google.com
Stanislav Fomichev [Wed, 27 Jan 2021 23:28:50 +0000 (15:28 -0800)]
bpf: Enable bpf_{g,s}etsockopt in BPF_CGROUP_UDP{4,6}_SENDMSG
Can be used to query/modify socket state for unconnected UDP sendmsg.
Those hooks run as BPF_CGROUP_RUN_SA_PROG_LOCK and operate on
a locked socket.
Signed-off-by: Stanislav Fomichev <sdf@google.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20210127232853.3753823-2-sdf@google.com
Sedat Dilek [Thu, 28 Jan 2021 01:50:58 +0000 (02:50 +0100)]
tools: Factor Clang, LLC and LLVM utils definitions
When dealing with BPF/BTF/pahole and DWARF v5 I wanted to build bpftool.
While looking into the source code I found duplicate assignments in misc tools
for the LLVM ecosystem, e.g. clang and llvm-objcopy.
Move the Clang, LLC and/or LLVM utils definitions to the tools/scripts/Makefile.include
file and add missing includes where needed. Honestly, I was inspired by the commit
c8a950d0d3b9 ("tools: Factor HOSTCC, HOSTLD, HOSTAR definitions").
I tested with bpftool and perf on Debian/testing AMD64 and LLVM/Clang v11.1.0-rc1.
Build instructions:
[ make and make-options ]
MAKE="make V=1"
MAKE_OPTS="HOSTCC=clang HOSTCXX=clang++ HOSTLD=ld.lld CC=clang LD=ld.lld LLVM=1 LLVM_IAS=1"
MAKE_OPTS="$MAKE_OPTS PAHOLE=/opt/pahole/bin/pahole"
[ clean-up ]
$MAKE $MAKE_OPTS -C tools/ clean
[ bpftool ]
$MAKE $MAKE_OPTS -C tools/bpf/bpftool/
[ perf ]
PYTHON=python3 $MAKE $MAKE_OPTS -C tools/perf/
I was careful with respecting the user's wish to override custom compiler, linker,
GNU/binutils and/or LLVM utils settings.
Signed-off-by: Sedat Dilek <sedat.dilek@gmail.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: Jiri Olsa <jolsa@redhat.com> # tools/build and tools/perf
Link: https://lore.kernel.org/bpf/20210128015117.20515-1-sedat.dilek@gmail.com
Stanislav Fomichev [Wed, 27 Jan 2021 19:31:40 +0000 (11:31 -0800)]
selftests/bpf: Verify that rebinding to port < 1024 from BPF works
Return 3 to indicate that the permission check for port 111
should be skipped.
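A hedged sketch of the hook behavior being tested (illustrative only; the
meaning of return value 3 is introduced by the next entry below):

	#include <linux/bpf.h>
	#include <bpf/bpf_helpers.h>
	#include <bpf/bpf_endian.h>

	SEC("cgroup/bind4")
	int bind_v4_prog(struct bpf_sock_addr *ctx)
	{
		/* 3 = allow bit | "skip CAP_NET_BIND_SERVICE" bit */
		if (ctx->user_port == bpf_htons(111))
			return 3;

		return 1;	/* allow, normal privileged-port check still applies */
	}

	char _license[] SEC("license") = "GPL";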
Signed-off-by: Stanislav Fomichev <sdf@google.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Martin KaFai Lau <kafai@fb.com>
Link: https://lore.kernel.org/bpf/20210127193140.3170382-2-sdf@google.com
Stanislav Fomichev [Wed, 27 Jan 2021 19:31:39 +0000 (11:31 -0800)]
bpf: Allow rewriting to ports under ip_unprivileged_port_start
At the moment, BPF_CGROUP_INET{4,6}_BIND hooks can rewrite user_port
to a privileged one (< ip_unprivileged_port_start), but it will
be rejected later on in __inet_bind or __inet6_bind.
Let's add another return value to indicate that the CAP_NET_BIND_SERVICE
check should be ignored. Use the same idea as we currently use
in cgroup/egress where bit #1 indicates CN. Instead, for
cgroup/bind{4,6}, bit #1 indicates that CAP_NET_BIND_SERVICE should
be bypassed.
v5:
- rename flags to be less confusing (Andrey Ignatov)
- rework BPF_PROG_CGROUP_INET_EGRESS_RUN_ARRAY to work on flags
and accept BPF_RET_SET_CN (no behavioral changes)
v4:
- Add missing IPv6 support (Martin KaFai Lau)
v3:
- Update description (Martin KaFai Lau)
- Fix capability restore in selftest (Martin KaFai Lau)
v2:
- Switch to explicit return code (Martin KaFai Lau)
Signed-off-by: Stanislav Fomichev <sdf@google.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Reviewed-by: Martin KaFai Lau <kafai@fb.com>
Acked-by: Andrey Ignatov <rdna@fb.com>
Link: https://lore.kernel.org/bpf/20210127193140.3170382-1-sdf@google.com
Cong Wang [Wed, 27 Jan 2021 22:15:01 +0000 (14:15 -0800)]
skmsg: Make sk_psock_destroy() static
sk_psock_destroy() is an RCU callback; I can't see any reason why
it would be used outside.
Signed-off-by: Cong Wang <cong.wang@bytedance.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Cc: John Fastabend <john.fastabend@gmail.com>
Cc: Jakub Sitnicki <jakub@cloudflare.com>
Cc: Lorenz Bauer <lmb@cloudflare.com>
Link: https://lore.kernel.org/bpf/20210127221501.46866-1-xiyou.wangcong@gmail.com
Menglong Dong [Wed, 27 Jan 2021 02:25:07 +0000 (18:25 -0800)]
bpf: Change 'BPF_ADD' to 'BPF_AND' in print_bpf_insn()
This 'BPF_ADD' is duplicated, and I believe it should be 'BPF_AND'.
Fixes: 981f94c3e921 ("bpf: Add bitwise atomic instructions")
Signed-off-by: Menglong Dong <dong.menglong@zte.com.cn>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Brendan Jackman <jackmanb@google.com>
Link: https://lore.kernel.org/bpf/20210127022507.23674-1-dong.menglong@zte.com.cn
Andrii Nakryiko [Tue, 26 Jan 2021 06:50:18 +0000 (22:50 -0800)]
selftests/bpf: Don't exit on failed bpf_testmod unload
Fix a bug in handling bpf_testmod unloading that causes test_progs to exit
prematurely if bpf_testmod unloading failed. This is especially problematic
when running a subset of test_progs that doesn't require root permissions and
doesn't rely on bpf_testmod, yet will fail immediately due to exit(1) in
unload_bpf_testmod().
Fixes: 9f7fa225894c ("selftests/bpf: Add bpf_testmod kernel module for testing")
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20210126065019.1268027-1-andrii@kernel.org
Tiezhu Yang [Mon, 25 Jan 2021 05:05:46 +0000 (13:05 +0800)]
samples/bpf: Set flag __SANE_USERSPACE_TYPES__ for MIPS to fix build warnings
There exist many build warnings when running make M=samples/bpf on the Loongson
platform. This issue is MIPS-related; x86 compiles just fine.
Here are some warnings:
CC samples/bpf/ibumad_user.o
samples/bpf/ibumad_user.c: In function ‘dump_counts’:
samples/bpf/ibumad_user.c:46:24: warning: format ‘%llu’ expects argument of type ‘long long unsigned int’, but argument 3 has type ‘__u64’ {aka ‘long unsigned int’} [-Wformat=]
printf("0x%02x : %llu\n", key, value);
~~~^ ~~~~~
%lu
CC samples/bpf/offwaketime_user.o
samples/bpf/offwaketime_user.c: In function ‘print_ksym’:
samples/bpf/offwaketime_user.c:34:17: warning: format ‘%llx’ expects argument of type ‘long long unsigned int’, but argument 3 has type ‘__u64’ {aka ‘long unsigned int’} [-Wformat=]
printf("%s/%llx;", sym->name, addr);
~~~^ ~~~~
%lx
samples/bpf/offwaketime_user.c: In function ‘print_stack’:
samples/bpf/offwaketime_user.c:68:17: warning: format ‘%lld’ expects argument of type ‘long long int’, but argument 3 has type ‘__u64’ {aka ‘long unsigned int’} [-Wformat=]
printf(";%s %lld\n", key->waker, count);
~~~^ ~~~~~
%ld
MIPS needs __SANE_USERSPACE_TYPES__ before <linux/types.h> to select
'int-ll64.h' in arch/mips/include/uapi/asm/types.h, then it can avoid
build warnings when printing __u64 with %llu, %llx or %lld.
The header tools/include/linux/types.h defines __SANE_USERSPACE_TYPES__,
so it seems that we could include <linux/types.h> in the source files which
have build warnings, but it has no effect because it actually includes
usr/include/linux/types.h instead of tools/include/linux/types.h; the
problem is that "usr/include" is preferred over "tools/include"
in samples/bpf/Makefile, and putting -Itools/include before -Iusr/include
sounds like an ugly hack.
So defining __SANE_USERSPACE_TYPES__ for MIPS in samples/bpf/Makefile
is proper. However, if we add "TPROGS_CFLAGS += -D__SANE_USERSPACE_TYPES__" in
samples/bpf/Makefile, the following error appears:
Auto-detecting system features:
... libelf: [ on ]
... zlib: [ on ]
... bpf: [ OFF ]
BPF API too old
make[3]: *** [Makefile:293: bpfdep] Error 1
make[2]: *** [Makefile:156: all] Error 2
With #ifndef __SANE_USERSPACE_TYPES__ in tools/include/linux/types.h,
the above error is gone, and this ifndef change does not hurt other
compilations.
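A hedged userspace illustration of the effect (a stand-alone test program, not
one of the samples):

	#define __SANE_USERSPACE_TYPES__	/* selects int-ll64.h on MIPS64 too */
	#include <linux/types.h>
	#include <stdio.h>

	int main(void)
	{
		__u64 value = 42;

		/* With the define, __u64 is 'unsigned long long' everywhere,
		 * so %llu matches and no -Wformat warning is emitted. */
		printf("0x%02x : %llu\n", 7, value);
		return 0;
	}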
Signed-off-by: Tiezhu Yang <yangtiezhu@loongson.cn>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Yonghong Song <yhs@fb.com>
Link: https://lore.kernel.org/bpf/1611551146-14052-1-git-send-email-yangtiezhu@loongson.cn
Florian Lehner [Sat, 23 Jan 2021 18:52:21 +0000 (19:52 +0100)]
tools, headers: Sync struct bpf_perf_event_data
Update struct bpf_perf_event_data with the addr field to match the
tools headers with the kernel headers.
Signed-off-by: Florian Lehner <dev@der-flo.net>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20210123185221.23946-1-dev@der-flo.net
Björn Töpel [Fri, 22 Jan 2021 15:47:25 +0000 (16:47 +0100)]
selftests/bpf: Avoid useless void *-casts
There is no need to cast to void * when the argument is void *. Avoid
cluttering the code.
Signed-off-by: Björn Töpel <bjorn.topel@intel.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20210122154725.22140-13-bjorn.topel@gmail.com
Björn Töpel [Fri, 22 Jan 2021 15:47:24 +0000 (16:47 +0100)]
selftests/bpf: Consistent malloc/calloc usage
Use calloc instead of malloc where it makes sense, and avoid C++-style
void *-cast.
Signed-off-by: Björn Töpel <bjorn.topel@intel.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20210122154725.22140-12-bjorn.topel@gmail.com
Björn Töpel [Fri, 22 Jan 2021 15:47:23 +0000 (16:47 +0100)]
selftests/bpf: Avoid heap allocation
The data variable is only used locally. Instead of using the heap,
stick to using the stack.
Signed-off-by: Björn Töpel <bjorn.topel@intel.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20210122154725.22140-11-bjorn.topel@gmail.com
Björn Töpel [Fri, 22 Jan 2021 15:47:22 +0000 (16:47 +0100)]
selftests/bpf: Define local variables at the beginning of a block
Use C89 rules for variable definition.
Signed-off-by: Björn Töpel <bjorn.topel@intel.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20210122154725.22140-10-bjorn.topel@gmail.com
Björn Töpel [Fri, 22 Jan 2021 15:47:21 +0000 (16:47 +0100)]
selftests/bpf: Change type from void * to struct generic_data *
Instead of casting from void *, let us use the actual type in
gen_udp_hdr().
Signed-off-by: Björn Töpel <bjorn.topel@intel.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20210122154725.22140-9-bjorn.topel@gmail.com
Björn Töpel [Fri, 22 Jan 2021 15:47:20 +0000 (16:47 +0100)]
selftests/bpf: Change type from void * to struct ifaceconfigobj *
Instead of casting from void *, let us use the actual type in
init_iface_config().
Signed-off-by: Björn Töpel <bjorn.topel@intel.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20210122154725.22140-8-bjorn.topel@gmail.com
Björn Töpel [Fri, 22 Jan 2021 15:47:19 +0000 (16:47 +0100)]
selftests/bpf: Remove casting by introduce local variable
Let us use a local variable in nsswitchthread(), so we can remove a
lot of casting for better readability.
Signed-off-by: Björn Töpel <bjorn.topel@intel.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20210122154725.22140-7-bjorn.topel@gmail.com
Björn Töpel [Fri, 22 Jan 2021 15:47:18 +0000 (16:47 +0100)]
selftests/bpf: Improve readability of xdpxceiver/worker_pkt_validate()
Introduce a local variable to get rid of a lot of casting. Move common
code outside the if/else-clause.
Signed-off-by: Björn Töpel <bjorn.topel@intel.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20210122154725.22140-6-bjorn.topel@gmail.com
Björn Töpel [Fri, 22 Jan 2021 15:47:17 +0000 (16:47 +0100)]
selftests/bpf: Remove memory leak
The allocated entry is immediately overwritten by an assignment. Fix
that.
Signed-off-by: Björn Töpel <bjorn.topel@intel.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20210122154725.22140-5-bjorn.topel@gmail.com
Björn Töpel [Fri, 22 Jan 2021 15:47:16 +0000 (16:47 +0100)]
selftests/bpf: Fix style warnings
Silence three checkpatch style warnings.
Signed-off-by: Björn Töpel <bjorn.topel@intel.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20210122154725.22140-4-bjorn.topel@gmail.com
Björn Töpel [Fri, 22 Jan 2021 15:47:15 +0000 (16:47 +0100)]
selftests/bpf: Remove unused enums
The enums undef and bidi are not used. Remove them.
Signed-off-by: Björn Töpel <bjorn.topel@intel.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20210122154725.22140-3-bjorn.topel@gmail.com
Björn Töpel [Fri, 22 Jan 2021 15:47:14 +0000 (16:47 +0100)]
selftests/bpf: Remove a lot of ifobject casting
Instead of passing void * all over the place, let us pass the actual
type (ifobject) and remove the void-ptr-to-type-ptr casting.
Signed-off-by: Björn Töpel <bjorn.topel@intel.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20210122154725.22140-2-bjorn.topel@gmail.com
Björn Töpel [Fri, 22 Jan 2021 10:53:51 +0000 (11:53 +0100)]
libbpf, xsk: Select AF_XDP BPF program based on kernel version
Add detection for kernel version, and adapt the BPF program based on
kernel support. This way, users will get the best possible performance
from the BPF program.
Signed-off-by: Björn Töpel <bjorn.topel@intel.com>
Signed-off-by: Marek Majtyka <alardam@gmail.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Reviewed-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
Acked-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
Link: https://lore.kernel.org/bpf/20210122105351.11751-4-bjorn.topel@gmail.com
Björn Töpel [Fri, 22 Jan 2021 10:53:50 +0000 (11:53 +0100)]
xsk: Fold xp_assign_dev and __xp_assign_dev
Fold xp_assign_dev and __xp_assign_dev. The former directly calls the
latter.
Signed-off-by: Björn Töpel <bjorn.topel@intel.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Reviewed-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
Link: https://lore.kernel.org/bpf/20210122105351.11751-3-bjorn.topel@gmail.com
Björn Töpel [Fri, 22 Jan 2021 10:53:49 +0000 (11:53 +0100)]
xsk: Remove explicit_free parameter from __xsk_rcv()
The explicit_free parameter of the __xsk_rcv() function was used to
mark whether the call was via the generic XDP or the native XDP
path. Instead of cluttering the code with if-statements and "true/false"
parameters, which are hard to understand, simply move the explicit free
to __xsk_map_redirect(), which is always called from the native XDP
path.
Signed-off-by: Björn Töpel <bjorn.topel@intel.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Reviewed-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
Link: https://lore.kernel.org/bpf/20210122105351.11751-2-bjorn.topel@gmail.com
Hangbin Liu [Fri, 22 Jan 2021 02:50:07 +0000 (10:50 +0800)]
samples/bpf: Add xdp program on egress for xdp_redirect_map
This patch adds an XDP program on egress to show that we can modify
the packet on egress. In this sample we set the packet's source
MAC to the egress interface's MAC address. The xdp_prog will be attached when
the -X option is supplied.
Signed-off-by: Hangbin Liu <liuhangbin@gmail.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Jesper Dangaard Brouer <brouer@redhat.com>
Link: https://lore.kernel.org/bpf/20210122025007.2968381-1-liuhangbin@gmail.com
Tobias Klauser [Thu, 21 Jan 2021 17:43:24 +0000 (18:43 +0100)]
bpf: Fix typo in scalar{,32}_min_max_rsh comments
s/bounts/bounds/
Signed-off-by: Tobias Klauser <tklauser@distanz.ch>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20210121174324.24127-1-tklauser@distanz.ch
Tiezhu Yang [Fri, 22 Jan 2021 01:39:44 +0000 (09:39 +0800)]
bpf, docs: Update build procedure for manually compiling LLVM and Clang
The current LLVM and Clang build procedure in samples/bpf/README.rst is
out of date. See below that the links are not accessible any more.
$ git clone http://llvm.org/git/llvm.git
Cloning into 'llvm'...
fatal: unable to access 'http://llvm.org/git/llvm.git/': Maximum (20) redirects followed
$ git clone --depth 1 http://llvm.org/git/clang.git
Cloning into 'clang'...
fatal: unable to access 'http://llvm.org/git/clang.git/': Maximum (20) redirects followed
The LLVM community has adopted new ways to build the compiler. There are
different ways to build LLVM and Clang; the Clang Getting Started page [1]
has one way. As Yonghong said, it is better to copy the build procedure
in Documentation/bpf/bpf_devel_QA.rst to keep things consistent.
I verified the procedure and it proved to be feasible, so we should
update README.rst to reflect reality. At the same time, update the
related comment in Makefile.
Additionally, as Fangrui said, the dir llvm-project/llvm/build/install is
not used, BUILD_SHARED_LIBS=OFF is the default option [2], so also change
Documentation/bpf/bpf_devel_QA.rst together.
At last, we recommend that developers who want the fastest incremental
builds use the Ninja build system [1], you can find it in your system's
package manager, usually the package is ninja or ninja-build [3], so add
ninja to build dependencies suggested by Nathan.
[1] https://clang.llvm.org/get_started.html
[2] https://www.llvm.org/docs/CMake.html
[3] https://github.com/ninja-build/ninja/wiki/Pre-built-Ninja-packages
Signed-off-by: Tiezhu Yang <yangtiezhu@loongson.cn>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Reviewed-by: Nathan Chancellor <natechancellor@gmail.com>
Acked-by: Yonghong Song <yhs@fb.com>
Cc: Fangrui Song <maskray@google.com>
Link: https://lore.kernel.org/bpf/1611279584-26047-1-git-send-email-yangtiezhu@loongson.cn
Junlin Yang [Thu, 21 Jan 2021 12:23:09 +0000 (20:23 +0800)]
selftest/bpf: Fix typo
Change 'exeeds' to 'exceeds'.
Signed-off-by: Junlin Yang <yangjunlin@yulong.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20210121122309.1501-1-angkery@163.com
Jiri Olsa [Thu, 21 Jan 2021 20:22:03 +0000 (21:22 +0100)]
libbpf: Use string table index from index table if needed
For very large ELF objects (with many sections), we could
get the special value SHN_XINDEX (65535) for the ELF object's string
table index, e_shstrndx.
Call elf_getshdrstrndx() to get the proper string table index,
instead of reading it directly from the ELF header.
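For illustration, a minimal sketch of the libelf pattern described above (simplified error handling, not the exact libbpf change):
#include <libelf.h>
/* e_shstrndx may hold SHN_XINDEX when the object has many sections; in
 * that case the real index lives in the sh_link field of section 0 and
 * elf_getshdrstrndx() fetches it for us. */
static int get_section_str_index(Elf *elf, size_t *shstrndx)
{
	if (elf_getshdrstrndx(elf, shstrndx))
		return -1;	/* simplified error handling */
	return 0;
}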
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20210121202203.9346-4-jolsa@kernel.org
Brendan Jackman [Wed, 20 Jan 2021 13:39:46 +0000 (13:39 +0000)]
docs: bpf: Clarify -mcpu=v3 requirement for atomic ops
Alexei pointed out [1] that this wording is pretty confusing. Here's
an attempt to be more explicit and clear.
[1] https://lore.kernel.org/bpf/CAADnVQJVvwoZsE1K+6qRxzF7+6CvZNzygnoBW9tZNWJELk5c=Q@mail.gmail.com/
Signed-off-by: Brendan Jackman <jackmanb@google.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20210120133946.2107897-3-jackmanb@google.com
Brendan Jackman [Wed, 20 Jan 2021 13:39:45 +0000 (13:39 +0000)]
docs: bpf: Fixup atomics markup
This fixes up the markup to fix a warning, be more consistent with
use of monospace, and use the correct .rst syntax for <em> (* instead
of _).
Signed-off-by: Brendan Jackman <jackmanb@google.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Reviewed-by: Lukas Bulwahn <lukas.bulwahn@gmail.com>
Link: https://lore.kernel.org/bpf/20210120133946.2107897-2-jackmanb@google.com
Alexei Starovoitov [Wed, 20 Jan 2021 22:23:00 +0000 (14:23 -0800)]
Merge branch 'bpf: misc performance improvements for cgroup'
Stanislav Fomichev says:
====================
The first patch adds a custom getsockopt for TCP_ZEROCOPY_RECEIVE
to remove kmalloc and lock_sock overhead from the data path.
The second patch removes kzalloc/kfree from getsockopt for the common cases.
The third patch switches cgroup_bpf_enabled to be per-attach-type, so
overhead is added only for the cgroup attach types used on the system.
No visible user-side changes.
v9:
- include linux/tcp.h instead of netinet/tcp.h in sockopt_sk.c
- note that v9 depends on the commit
4be34f3d0731 ("bpf: Don't leak
memory in bpf getsockopt when optlen == 0") from bpf tree
v8:
- add bpi.h to tools/include/uapi in the same patch (Martin KaFai Lau)
- kmalloc instead of kzalloc when exporting buffer (Martin KaFai Lau)
- note that v8 depends on the commit
4be34f3d0731 ("bpf: Don't leak
memory in bpf getsockopt when optlen == 0") from bpf tree
v7:
- add comment about buffer contents for retval != 0 (Martin KaFai Lau)
- export tcp.h into tools/include/uapi (Martin KaFai Lau)
- note that v7 depends on the commit
4be34f3d0731 ("bpf: Don't leak
memory in bpf getsockopt when optlen == 0") from bpf tree
v6:
- avoid indirect cost for new bpf_bypass_getsockopt (Eric Dumazet)
v5:
- reorder patches to reduce the churn (Martin KaFai Lau)
v4:
- update performance numbers
- bypass_bpf_getsockopt (Martin KaFai Lau)
v3:
- remove extra newline, add comment about sizeof tcp_zerocopy_receive
(Martin KaFai Lau)
- add another patch to remove lock_sock overhead from
TCP_ZEROCOPY_RECEIVE; technically, this makes patch #1 obsolete,
but I'd still prefer to keep it to help with other socket
options
v2:
- perf numbers for getsockopt kmalloc reduction (Song Liu)
- (sk) in BPF_CGROUP_PRE_CONNECT_ENABLED (Song Liu)
- 128 -> 64 buffer size, BUILD_BUG_ON (Martin KaFai Lau)
====================
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Stanislav Fomichev [Fri, 15 Jan 2021 16:35:01 +0000 (08:35 -0800)]
bpf: Split cgroup_bpf_enabled per attach type
When we attach any cgroup hook, the rest (even if unused/unattached) start
to contribute a small overhead. In particular, the one we want to avoid is
__cgroup_bpf_run_filter_skb, which does two redirections to get to
the cgroup and pushes/pulls the skb.
Let's split cgroup_bpf_enabled to be per-attach-type to make sure
only the attach types in use trigger that overhead.
I've dropped some existing high-level cgroup_bpf_enabled checks in some
places because the BPF_PROG_CGROUP_XXX_RUN macros usually have another
cgroup_bpf_enabled check.
I also had to copy-paste BPF_CGROUP_RUN_SA_PROG_LOCK for
GETPEERNAME/GETSOCKNAME because the type for cgroup_bpf_enabled[type]
has to be constant and known at compile time.
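For illustration, the shape of the change could be sketched as follows (simplified; the exact key array and helper names in the patch may differ):
/* Before: one global static key, any attached hook enables every call site. */
#define cgroup_bpf_enabled	static_branch_unlikely(&cgroup_bpf_enabled_key)
/* After (sketch): one key per attach type, so only call sites for attach
 * types that are actually in use pay the cost. */
#define cgroup_bpf_enabled(atype)					\
	static_branch_unlikely(&cgroup_bpf_enabled_key[atype])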
Signed-off-by: Stanislav Fomichev <sdf@google.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Song Liu <songliubraving@fb.com>
Link: https://lore.kernel.org/bpf/20210115163501.805133-4-sdf@google.com
Stanislav Fomichev [Fri, 15 Jan 2021 16:35:00 +0000 (08:35 -0800)]
bpf: Try to avoid kzalloc in cgroup/{s,g}etsockopt
When we attach a bpf program to cgroup/getsockopt, every other getsockopt()
syscall starts incurring kzalloc/kfree cost.
Let's add a small buffer on the stack and use it for small (the majority of)
{s,g}etsockopt values. The buffer is small enough to fit into
a cache line and covers the majority of simple options (most
of them are 4-byte ints).
It seems natural to do the same for setsockopt, but it's a bit more
involved when the BPF program modifies the data (where we have to
kmalloc). The assumption is that for the majority of setsockopt
calls (which either handle pure BPF options or apply policy) this
will bring some benefit as well.
Without this patch (we remove about 1% __kmalloc):
3.38% 0.07% tcp_mmap [kernel.kallsyms] [k] __cgroup_bpf_run_filter_getsockopt
|
--3.30%--__cgroup_bpf_run_filter_getsockopt
|
--0.81%--__kmalloc
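A hedged sketch of the idea (names and the buffer size are illustrative, not the exact patch):
#define SOCKOPT_STACK_BUF_LEN	64	/* fits in a cache line, covers 4-byte ints */
struct bpf_sockopt_buf {
	u8 data[SOCKOPT_STACK_BUF_LEN];
};
/* Use the caller's on-stack buffer for small optvals and only fall back
 * to an allocation for the (rare) larger ones. */
static void *sockopt_alloc_buf(int max_optlen, struct bpf_sockopt_buf *buf)
{
	if (max_optlen <= sizeof(buf->data))
		return buf->data;
	return kzalloc(max_optlen, GFP_USER);
}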
Signed-off-by: Stanislav Fomichev <sdf@google.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Martin KaFai Lau <kafai@fb.com>
Link: https://lore.kernel.org/bpf/20210115163501.805133-3-sdf@google.com
Stanislav Fomichev [Fri, 15 Jan 2021 16:34:59 +0000 (08:34 -0800)]
bpf: Remove extra lock_sock for TCP_ZEROCOPY_RECEIVE
Add custom implementation of getsockopt hook for TCP_ZEROCOPY_RECEIVE.
We skip generic hooks for TCP_ZEROCOPY_RECEIVE and have a custom
call in do_tcp_getsockopt using the on-stack data. This removes
3% overhead for locking/unlocking the socket.
Without this patch:
3.38% 0.07% tcp_mmap [kernel.kallsyms] [k] __cgroup_bpf_run_filter_getsockopt
|
--3.30%--__cgroup_bpf_run_filter_getsockopt
|
--0.81%--__kmalloc
With the patch applied:
0.52% 0.12% tcp_mmap [kernel.kallsyms] [k] __cgroup_bpf_run_filter_getsockopt_kern
Note, exporting uapi/tcp.h requires removing netinet/tcp.h
from test_progs.h because those headers have conflicting
definitions.
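A rough sketch of the bypass mechanism (simplified; details may differ from the final patch):
/* The protocol flags sockopts that should skip the generic cgroup BPF
 * getsockopt hook; TCP does so for TCP_ZEROCOPY_RECEIVE, which instead
 * runs the program from do_tcp_getsockopt() with on-stack data. */
static bool tcp_bpf_bypass_getsockopt(int level, int optname)
{
	return level == SOL_TCP && optname == TCP_ZEROCOPY_RECEIVE;
}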
Signed-off-by: Stanislav Fomichev <sdf@google.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Martin KaFai Lau <kafai@fb.com>
Link: https://lore.kernel.org/bpf/20210115163501.805133-2-sdf@google.com
Yonghong Song [Tue, 19 Jan 2021 15:35:18 +0000 (07:35 -0800)]
bpf: Permit size-0 datasec
llvm patch https://reviews.llvm.org/D84002 permitted
emitting an empty rodata datasec if the elf .rodata section
contains read-only data from local variables. These
local variables are not emitted as BTF_KIND_VARs
since llvm converts them to static variables with private
linkage and without debuginfo types. Such an empty rodata
datasec makes skeleton code generation easier since, for the
skeleton, a rodata struct is generated if there is a
.rodata elf section. The existence of a rodata
btf datasec is also consistent with the existence
of a rodata map created by libbpf.
The btf with such an empty rodata datasec will fail
in the kernel though, as the kernel rejects a datasec
with zero vlen and zero size. For example, for the code below,
int sys_enter(void *ctx)
{
int fmt[6] = {1, 2, 3, 4, 5, 6};
int dst[6];
bpf_probe_read(dst, sizeof(dst), fmt);
return 0;
}
We got the below btf (bpftool btf dump ./test.o):
[1] PTR '(anon)' type_id=0
[2] FUNC_PROTO '(anon)' ret_type_id=3 vlen=1
'ctx' type_id=1
[3] INT 'int' size=4 bits_offset=0 nr_bits=32 encoding=SIGNED
[4] FUNC 'sys_enter' type_id=2 linkage=global
[5] INT 'char' size=1 bits_offset=0 nr_bits=8 encoding=SIGNED
[6] ARRAY '(anon)' type_id=5 index_type_id=7 nr_elems=4
[7] INT '__ARRAY_SIZE_TYPE__' size=4 bits_offset=0 nr_bits=32 encoding=(none)
[8] VAR '_license' type_id=6, linkage=global-alloc
[9] DATASEC '.rodata' size=0 vlen=0
[10] DATASEC 'license' size=0 vlen=1
type_id=8 offset=0 size=4
When loading the ./test.o to the kernel with bpftool,
we see the following error:
libbpf: Error loading BTF: Invalid argument(22)
libbpf: magic: 0xeb9f
...
[6] ARRAY (anon) type_id=5 index_type_id=7 nr_elems=4
[7] INT __ARRAY_SIZE_TYPE__ size=4 bits_offset=0 nr_bits=32 encoding=(none)
[8] VAR _license type_id=6 linkage=1
[9] DATASEC .rodata size=24 vlen=0 vlen == 0
libbpf: Error loading .BTF into kernel: -22. BTF is optional, ignoring.
Basically, libbpf changed the .rodata datasec size to 24 since the elf .rodata
section size is 24. The kernel then rejected the BTF since vlen = 0.
Note that the above kernel verifier failure can be worked around by
changing the local variable "fmt" to a static or global, optionally const, variable.
This patch permits a datasec with vlen = 0 in the kernel.
Signed-off-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20210119153519.3901963-1-yhs@fb.com
Alexei Starovoitov [Tue, 19 Jan 2021 21:00:50 +0000 (13:00 -0800)]
Merge branch 'Allow attaching to bare tracepoints'
Qais Yousef says:
====================
Changes in v3:
* Fix not returning error value correctly in
trigger_module_test_write() (Yonghong)
* Add Yonghong acked-by to patch 1.
Changes in v2:
* Fix compilation error. (Andrii)
* Make the new test use write() instead of read() (Andrii)
Add some missing glue logic to teach bpf about bare tracepoints - tracepoints
without any trace event associated with them.
Bare tracepoints are declared with DECLARE_TRACE(). Full tracepoints are
declared with TRACE_EVENT().
BPF can attach to these tracepoints as RAW_TRACEPOINT() only, as there are no
events created for them in tracefs.
====================
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Qais Yousef [Tue, 19 Jan 2021 12:22:37 +0000 (12:22 +0000)]
selftests: bpf: Add a new test for bare tracepoints
Reuse the module_attach infrastructure to add a new bare tracepoint and check
that we can attach to it as a raw tracepoint.
Signed-off-by: Qais Yousef <qais.yousef@arm.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Yonghong Song <yhs@fb.com>
Link: https://lore.kernel.org/bpf/20210119122237.2426878-3-qais.yousef@arm.com
Alexei Starovoitov [Tue, 19 Jan 2021 20:42:13 +0000 (12:42 -0800)]
Merge branch 'bpf,x64: implement jump padding in jit'
Gary Lin says:
====================
This patch series implements jump padding in the x64 jit to cover some
corner cases that used to consume more than 20 jit passes and cause
failures.
v4:
- Add the detailed comments about the possible padding bytes
- Add the second test case which triggers jmp_cond padding and imm32 nop
jmp padding.
- Add the new test case as another subprog
v3:
- Copy the instructions of prologue separately or the size calculation
of the first BPF instruction would include the prologue.
- Replace WARN_ONCE() with pr_err() and EFAULT
- Use MAX_PASSES in the for loop condition check
- Remove the "padded" flag from x64_jit_data. For the extra pass of
subprogs, padding is always enabled since it won't hurt the images
that converge without padding.
v2:
- Simplify the sample code in the commit description and provide the
jit code
- Check the expected padding bytes with WARN_ONCE
- Move the 'padded' flag to 'struct x64_jit_data'
- Remove the EXPECTED_FAIL flag from bpf_fill_maxinsns11() in test_bpf
- Add 2 verifier tests
====================
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Qais Yousef [Tue, 19 Jan 2021 12:22:36 +0000 (12:22 +0000)]
trace: bpf: Allow bpf to attach to bare tracepoints
Some subsystems only have bare tracepoints (a tracepoint with no
associated trace event) to avoid the problem of trace events being an
ABI that can't be changed.
From the bpf perspective, bare tracepoints are what it calls
RAW_TRACEPOINT().
Since bpf assumed there's a 1:1 mapping, it relied on hooking into the
DEFINE_EVENT() macro to create the bpf mapping of tracepoints. Since
bare tracepoints use DECLARE_TRACE() to create the tracepoint, bpf had
no knowledge of their existence.
By teaching bpf_probe.h to parse DECLARE_TRACE() in a similar fashion to
DEFINE_EVENT(), bpf can find and attach to the new raw tracepoints.
Enabling that comes with the contract that changes to raw tracepoints
don't constitute a regression if they break existing bpf programs.
We need the ability to continue to morph and modify these raw
tracepoints without worrying about any ABI.
Update Documentation/bpf/bpf_design_QA.rst to document this contract.
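For illustration (the tracepoint and program names are hypothetical), a bare tracepoint and the raw-tracepoint BPF program attaching to it could look roughly like this:
/* Kernel side: a bare tracepoint, no TRACE_EVENT(), nothing in tracefs. */
DECLARE_TRACE(foo_bar_bare,
	TP_PROTO(int cpu),
	TP_ARGS(cpu));
/* BPF side: attach as a raw tracepoint since there is no trace event. */
SEC("raw_tp/foo_bar_bare")
int BPF_PROG(handle_foo_bar_bare, int cpu)
{
	return 0;
}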
Signed-off-by: Qais Yousef <qais.yousef@arm.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Yonghong Song <yhs@fb.com>
Link: https://lore.kernel.org/bpf/20210119122237.2426878-2-qais.yousef@arm.com
Gary Lin [Tue, 19 Jan 2021 10:25:01 +0000 (18:25 +0800)]
selftests/bpf: Add verifier tests for x64 jit jump padding
There are 3 tests added into verifier's jit tests to trigger x64
jit jump padding.
The first test can be represented as the following assembly code:
1: bpf_call bpf_get_prandom_u32
2: if r0 == 1 goto pc+128
3: if r0 == 2 goto pc+128
...
129: if r0 == 128 goto pc+128
130: goto pc+128
131: goto pc+127
...
256: goto pc+2
257: goto pc+1
258: r0 = 1
259: ret
We first store a random number in r0 and add the corresponding
conditional jumps (2~129) to make the verifier believe that the jump
instructions from 130 to 257 are reachable. When the program is sent to
the x64 jit, it starts to optimize out the NOP jumps backwards from 257.
Since there are 128 such jumps, the program easily reaches 15 passes and
triggers jump padding.
Here is the x64 jit code of the first test:
0: 0f 1f 44 00 00 nop DWORD PTR [rax+rax*1+0x0]
5: 66 90 xchg ax,ax
7: 55 push rbp
8: 48 89 e5 mov rbp,rsp
b: e8 4c 90 75 e3 call 0xffffffffe375905c
10: 48 83 f8 01 cmp rax,0x1
14: 0f 84 fe 04 00 00 je 0x518
1a: 48 83 f8 02 cmp rax,0x2
1e: 0f 84 f9 04 00 00 je 0x51d
...
f6: 48 83 f8 18 cmp rax,0x18
fa: 0f 84 8b 04 00 00 je 0x58b
100: 48 83 f8 19 cmp rax,0x19
104: 0f 84 86 04 00 00 je 0x590
10a: 48 83 f8 1a cmp rax,0x1a
10e: 0f 84 81 04 00 00 je 0x595
...
500: 0f 84 83 01 00 00 je 0x689
506: 48 81 f8 80 00 00 00 cmp rax,0x80
50d: 0f 84 76 01 00 00 je 0x689
513: e9 71 01 00 00 jmp 0x689
518: e9 6c 01 00 00 jmp 0x689
...
5fe: e9 86 00 00 00 jmp 0x689
603: e9 81 00 00 00 jmp 0x689
608: 0f 1f 00 nop DWORD PTR [rax]
60b: eb 7c jmp 0x689
60d: eb 7a jmp 0x689
...
683: eb 04 jmp 0x689
685: eb 02 jmp 0x689
687: 66 90 xchg ax,ax
689: b8 01 00 00 00 mov eax,0x1
68e: c9 leave
68f: c3 ret
As expected, a 3-byte NOP is inserted at 608 due to the transition
from imm32 jmp to imm8 jmp. A 2-byte NOP is also inserted at 687 to
replace a NOP jump.
The second test case is tricky. Here is the assembly code:
1: bpf_call bpf_get_prandom_u32
2: if r0 == 1 goto pc+2048
3: if r0 == 2 goto pc+2048
...
2049: if r0 == 2048 goto pc+2048
2050: goto pc+2048
2051: goto pc+16
2052: goto pc+15
...
2064: goto pc+3
2065: goto pc+2
2066: goto pc+1
...
[repeat "goto pc+16".."goto pc+1" 127 times]
...
4099: r0 = 2
4100: ret
There are 4 major parts of the program.
1) 1~2049: Those are instructions to make 2050~4098 reachable. Some of
them could also generate the padding for jmp_cond.
2) 2050: This is the target instruction for the imm32 nop jmp padding.
3) 2051~4098: The repeated "goto 1~16" instructions are designed to be
consumed by the nop jmp optimization. In the end, those
instructions become 128 consecutive 0-offset jmps and are
optimized out in 1 pass, and this makes insn 2050 an imm32
nop jmp in the next pass, so that we can trigger the
5-byte padding.
4) 4099~4100: Those are the instructions to end the program.
The x64 jit code is like this:
0: 0f 1f 44 00 00 nop DWORD PTR [rax+rax*1+0x0]
5: 66 90 xchg ax,ax
7: 55 push rbp
8: 48 89 e5 mov rbp,rsp
b: e8 bc 7b d5 d3 call 0xffffffffd3d57bcc
10: 48 83 f8 01 cmp rax,0x1
14: 0f 84 7e 66 00 00 je 0x6698
1a: 48 83 f8 02 cmp rax,0x2
1e: 0f 84 74 66 00 00 je 0x6698
24: 48 83 f8 03 cmp rax,0x3
28: 0f 84 6a 66 00 00 je 0x6698
2e: 48 83 f8 04 cmp rax,0x4
32: 0f 84 60 66 00 00 je 0x6698
38: 48 83 f8 05 cmp rax,0x5
3c: 0f 84 56 66 00 00 je 0x6698
42: 48 83 f8 06 cmp rax,0x6
46: 0f 84 4c 66 00 00 je 0x6698
...
666c: 48 81 f8 fe 07 00 00 cmp rax,0x7fe
6673: 0f 1f 40 00 nop DWORD PTR [rax+0x0]
6677: 74 1f je 0x6698
6679: 48 81 f8 ff 07 00 00 cmp rax,0x7ff
6680: 0f 1f 40 00 nop DWORD PTR [rax+0x0]
6684: 74 12 je 0x6698
6686: 48 81 f8 00 08 00 00 cmp rax,0x800
668d: 0f 1f 40 00 nop DWORD PTR [rax+0x0]
6691: 74 05 je 0x6698
6693: 0f 1f 44 00 00 nop DWORD PTR [rax+rax*1+0x0]
6698: b8 02 00 00 00 mov eax,0x2
669d: c9 leave
669e: c3 ret
Since insns 2051~4098 are optimized out right before the padding pass,
several conditional jumps from the first part are replaced with
imm8 jmp_cond, and this triggers the 4-byte padding, for example at
6673, 6680, and 668d. On the other hand, insn 2050 is replaced with the
5-byte nops at 6693.
The third test invokes the first and second tests as subprogs to test
bpf2bpf. Per the system log, one more jit run happened with only
one pass and the same jit code was produced.
v4:
- Add the second test case which triggers jmp_cond padding and imm32 nop
jmp padding.
- Add the new test case as another subprog
Signed-off-by: Gary Lin <glin@suse.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20210119102501.511-4-glin@suse.com
Gary Lin [Tue, 19 Jan 2021 10:25:00 +0000 (18:25 +0800)]
test_bpf: Remove EXPECTED_FAIL flag from bpf_fill_maxinsns11
With NOP padding, the x64 jit can now handle jump cases like
bpf_fill_maxinsns11().
Signed-off-by: Gary Lin <glin@suse.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20210119102501.511-3-glin@suse.com
Gary Lin [Tue, 19 Jan 2021 10:24:59 +0000 (18:24 +0800)]
bpf,x64: Pad NOPs to make images converge more easily
The x64 bpf jit expects bpf images to converge within the given passes, but
it could fail to do so in some corner cases. For example:
l0: ja 40
l1: ja 40
[... repeated ja 40 ]
l39: ja 40
l40: ret #0
This bpf program contains 40 "ja 40" instructions which are effectively
NOPs and are designed to be replaced with valid code dynamically. Ideally,
the bpf jit should optimize those "ja 40" instructions out when translating
the bpf instructions into x64 machine code. However, do_jit() can only
remove one "ja 40" with offset==0 on each pass, so it requires at least
40 runs to eliminate those JMPs and exceeds the current limit of
passes (20). In the end, the program gets rejected when BPF_JIT_ALWAYS_ON
is set even though it's legit as a classic socket filter.
To make bpf images more likely converge within 20 passes, this commit
pads some instructions with NOPs in the last 5 passes:
1. conditional jumps
A possible size variance comes from the adoption of imm8 JMP. If the
offset is imm8, we calculate the size difference of this BPF instruction
between the previous and the current pass and fill the gap with NOPs.
To avoid the recalculation of jump offset, those NOPs are inserted before
the JMP code, so we have to subtract the 2 bytes of imm8 JMP when
calculating the NOP number.
2. BPF_JA
There are two conditions for BPF_JA.
a.) nop jumps
If this instruction is not optimized out in the previous pass,
instead of removing it, we insert the equivalent size of NOPs.
b.) label jumps
Similar to condition jumps, we prepend NOPs right before the JMP
code.
To make the code concise, emit_nops() is modified to use the signed len and
return the number of inserted NOPs.
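Roughly (a sketch of the shape, not the exact diff; the NOP table and EMIT helpers are the jit's existing ones):
/* Emit 'len' bytes worth of NOPs in chunks of at most ASM_NOP_MAX and
 * report how many bytes were emitted. */
static int emit_nops(u8 **pprog, int len)
{
	u8 *prog = *pprog;
	int i, noplen, cnt = 0;
	while (len > 0) {
		noplen = min(len, ASM_NOP_MAX);
		for (i = 0; i < noplen; i++)
			EMIT1(ideal_nops[noplen][i]);
		len -= noplen;
	}
	*pprog = prog;
	return cnt;
}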
For bpf-to-bpf, we always enable padding for the extra pass since there
is only one extra run and the jump padding doesn't affect the images
that converge without padding.
After applying this patch, the corner case was loaded with the following
jit code:
flen=45 proglen=77 pass=17 image=
ffffffffc03367d4 from=jump pid=10097
JIT code:
00000000: 0f 1f 44 00 00 55 48 89 e5 53 41 55 31 c0 45 31
JIT code:
00000010: ed 48 89 fb eb 30 eb 2e eb 2c eb 2a eb 28 eb 26
JIT code:
00000020: eb 24 eb 22 eb 20 eb 1e eb 1c eb 1a eb 18 eb 16
JIT code:
00000030: eb 14 eb 12 eb 10 eb 0e eb 0c eb 0a eb 08 eb 06
JIT code:
00000040: eb 04 eb 02 66 90 31 c0 41 5d 5b c9 c3
0: 0f 1f 44 00 00 nop DWORD PTR [rax+rax*1+0x0]
5: 55 push rbp
6: 48 89 e5 mov rbp,rsp
9: 53 push rbx
a: 41 55 push r13
c: 31 c0 xor eax,eax
e: 45 31 ed xor r13d,r13d
11: 48 89 fb mov rbx,rdi
14: eb 30 jmp 0x46
16: eb 2e jmp 0x46
...
3e: eb 06 jmp 0x46
40: eb 04 jmp 0x46
42: eb 02 jmp 0x46
44: 66 90 xchg ax,ax
46: 31 c0 xor eax,eax
48: 41 5d pop r13
4a: 5b pop rbx
4b: c9 leave
4c: c3 ret
At the 16th pass, 15 jumps were already optimized out and one jump was
replaced with NOPs at 44, and the image converged at the 17th pass.
v4:
- Add the detailed comments about the possible padding bytes
v3:
- Copy the instructions of prologue separately or the size calculation
of the first BPF instruction would include the prologue.
- Replace WARN_ONCE() with pr_err() and EFAULT
- Use MAX_PASSES in the for loop condition check
- Remove the "padded" flag from x64_jit_data. For the extra pass of
subprogs, padding is always enabled since it won't hurt the images
that converge without padding.
v2:
- Simplify the sample code in the description and provide the jit code
- Check the expected padding bytes with WARN_ONCE
- Move the 'padded' flag to 'struct x64_jit_data'
Signed-off-by: Gary Lin <glin@suse.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20210119102501.511-2-glin@suse.com
Lukas Bulwahn [Mon, 18 Jan 2021 08:00:04 +0000 (09:00 +0100)]
docs, bpf: Add minimal markup to address doc warning
Commit
91c960b00566 ("bpf: Rename BPF_XADD and prepare to encode other
atomics in .imm") modified the BPF documentation, but missed some ReST
markup.
Hence, make htmldocs warns on Documentation/networking/filter.rst:1053:
WARNING: Inline emphasis start-string without end-string.
Add some minimal markup to address this warning.
Fixes:
91c960b00566 ("bpf: Rename BPF_XADD and prepare to encode other atomics in .imm")
Signed-off-by: Lukas Bulwahn <lukas.bulwahn@gmail.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Brendan Jackman <jackmanb@google.com>
Link: https://lore.kernel.org/bpf/20210118080004.6367-1-lukas.bulwahn@gmail.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Björn Töpel [Mon, 18 Jan 2021 09:17:53 +0000 (10:17 +0100)]
samples/bpf: Add BPF_ATOMIC_OP macro for BPF samples
Brendan Jackman added extended atomic operations to the BPF instruction
set in commit
7064a7341a0d ("Merge branch 'Atomics for eBPF'"), which
introduced the BPF_ATOMIC_OP macro. However, that macro was missing
from the BPF samples. Fix that by adding it to bpf_insn.h.
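The macro encodes the atomic operation in the instruction's immediate; its definition mirrors the one already used elsewhere in the tree and looks roughly like this (reproduced here as a sketch):
#define BPF_ATOMIC_OP(SIZE, OP, DST, SRC, OFF)			\
	((struct bpf_insn) {					\
		.code  = BPF_STX | BPF_SIZE(SIZE) | BPF_ATOMIC,	\
		.dst_reg = DST,					\
		.src_reg = SRC,					\
		.off   = OFF,					\
		.imm   = OP })
For example, BPF_ATOMIC_OP(BPF_DW, BPF_ADD, BPF_REG_1, BPF_REG_2, 0) emits a 64-bit atomic add of r2 into memory at r1+0.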
Fixes:
91c960b00566 ("bpf: Rename BPF_XADD and prepare to encode other atomics in .imm")
Signed-off-by: Björn Töpel <bjorn.topel@intel.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Reviewed-by: Brendan Jackman <jackmanb@google.com>
Link: https://lore.kernel.org/bpf/20210118091753.107572-1-bjorn.topel@gmail.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Lorenzo Bianconi [Tue, 12 Jan 2021 18:26:13 +0000 (19:26 +0100)]
net, xdp: Introduce xdp_build_skb_from_frame utility routine
Introduce the xdp_build_skb_from_frame utility routine to build an skb
from an xdp_frame. Compared to __xdp_build_skb_from_frame,
xdp_build_skb_from_frame will allocate the skb object itself. Rely on
xdp_build_skb_from_frame in the veth driver.
Also add the missing xdp metadata support in the veth_xdp_rcv_one() routine.
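A hedged sketch of the new helper's shape (allocation details simplified; not the exact patch):
struct sk_buff *xdp_build_skb_from_frame(struct xdp_frame *xdpf,
					 struct net_device *dev)
{
	struct sk_buff *skb;
	/* Unlike __xdp_build_skb_from_frame(), which expects a caller-provided
	 * skb, this variant allocates the skb head itself. */
	skb = kmem_cache_alloc(skbuff_head_cache, GFP_ATOMIC);
	if (unlikely(!skb))
		return NULL;
	memset(skb, 0, offsetof(struct sk_buff, tail));
	return __xdp_build_skb_from_frame(xdpf, skb, dev);
}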
Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Reviewed-by: Toshiaki Makita <toshiaki.makita1@gmail.com>
Acked-by: Jesper Dangaard Brouer <brouer@redhat.com>
Link: https://lore.kernel.org/bpf/94ade9e853162ae1947941965193190da97457bc.1610475660.git.lorenzo@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>