Song Liu [Mon, 26 Sep 2022 18:47:39 +0000 (11:47 -0700)]
bpf: Enforce W^X for bpf trampoline
Mark the trampoline as RO+X after arch_prepare_bpf_trampoline, so that
the trampoline strictly follows the W^X rule. This will turn off warnings like
CPA refuse W^X violation:
8000000000000163 ->
0000000000000163 range: ...
Also remove bpf_jit_alloc_exec_page(), since it is not used any more.
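A minimal sketch of the idea, assuming set_memory_ro()/set_memory_x() as the
sealing primitives (the exact helpers and image bookkeeping in the patch may
differ):
  #include <linux/set_memory.h>
  /* seal a freshly written trampoline page: RO+X instead of W+X */
  static int bpf_tramp_seal_image(void *image)
  {
          int err;

          err = set_memory_ro((unsigned long)image, 1);
          if (err)
                  return err;
          return set_memory_x((unsigned long)image, 1);
  }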
Signed-off-by: Song Liu <song@kernel.org>
Link: https://lore.kernel.org/r/20220926184739.3512547-3-song@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Song Liu [Mon, 26 Sep 2022 18:47:38 +0000 (11:47 -0700)]
bpf: use bpf_prog_pack for bpf_dispatcher
Allocate bpf_dispatcher with bpf_prog_pack_alloc so that bpf_dispatcher
can share pages with bpf programs.
arch_prepare_bpf_dispatcher() is updated to provide an RW buffer as a working
area for the arch code to write to.
This also fixes CPA W^X warnings like:
CPA refuse W^X violation:
8000000000000163 ->
0000000000000163 range: ...
Signed-off-by: Song Liu <song@kernel.org>
Link: https://lore.kernel.org/r/20220926184739.3512547-2-song@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Alexei Starovoitov [Tue, 27 Sep 2022 03:30:40 +0000 (20:30 -0700)]
Merge branch 'bpf: Fixes for CONFIG_X86_KERNEL_IBT'
Jiri Olsa says:
====================
Martynas reported bpf_get_func_ip returning a +4 address when the
CONFIG_X86_KERNEL_IBT option is enabled, and I found there are
some failing bpf tests when this option is enabled.
The CONFIG_X86_KERNEL_IBT option adds an endbr instruction at the
function entry, so the idea is to 'fix' the entry ip for kprobe_multi
and trampoline probes, because they are placed on the function
entry.
v5 changes:
- updated uapi/linux/bpf.h headers with comment for
bpf_get_func_ip returning 0 [Andrii]
- added acks
v4 changes:
- used get_kernel_nofault to read previous instruction [Peter]
- used movabs instruction in trampoline comment [Peter]
- renamed fentry_ip argument in kprobe_multi_link_handler [Peter]
v3 changes:
- using 'unused' bpf function to get IBT config option
into selftest skeleton
- rebased to current bpf-next/master
- added ack/review from Masami
v2 changes:
- change kprobes get_func_ip to return zero for kprobes
attached within the function body [Andrii]
- detect IBT config and properly test kprobe with offset
[Andrii]
v1 changes:
- read previous instruction in kprobe_multi link handler
and adjust entry_ip for CONFIG_X86_KERNEL_IBT option
- split first patch into 2 separate changes
- update changelogs
====================
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Jiri Olsa [Mon, 26 Sep 2022 15:33:40 +0000 (17:33 +0200)]
selftests/bpf: Fix get_func_ip offset test for CONFIG_X86_KERNEL_IBT
With CONFIG_X86_KERNEL_IBT enabled, the test for kprobe with offset
won't work because of the extra endbr instruction.
As suggested by Andrii, add CONFIG_X86_KERNEL_IBT detection
and use the appropriate offset value based on that.
Also remove the test7 program, because it does the same as test6.
Suggested-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Link: https://lore.kernel.org/r/20220926153340.1621984-7-jolsa@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Jiri Olsa [Mon, 26 Sep 2022 15:33:39 +0000 (17:33 +0200)]
bpf: Return value in kprobe get_func_ip only for entry address
Change the return value of the kprobe version of bpf_get_func_ip
to return zero if the attach address is not on the function's
entry point.
For kprobes attached in the middle of the function we can't easily
get to the function address, especially now with the CONFIG_X86_KERNEL_IBT
support.
If the user cares about the current IP for kprobes attached within the
function body, they can get it with PT_REGS_IP(ctx).
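A hedged sketch of the changed helper (simplified; the real code additionally
adjusts the returned address for IBT):
  BPF_CALL_1(bpf_get_func_ip_kprobe, struct pt_regs *, regs)
  {
          struct kprobe *kp = kprobe_running();

          /* not attached on the function entry -> report 0 */
          if (!kp || !(kp->flags & KPROBE_FLAG_ON_FUNC_ENTRY))
                  return 0;

          return (uintptr_t)kp->addr;
  }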
Suggested-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: Martynas Pumputis <m@lambda.lt>
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Link: https://lore.kernel.org/r/20220926153340.1621984-6-jolsa@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Jiri Olsa [Mon, 26 Sep 2022 15:33:38 +0000 (17:33 +0200)]
bpf: Adjust kprobe_multi entry_ip for CONFIG_X86_KERNEL_IBT
Martynas reported bpf_get_func_ip returning a +4 address when the
CONFIG_X86_KERNEL_IBT option is enabled.
When CONFIG_X86_KERNEL_IBT is enabled we'll have an endbr instruction
at the function entry, which breaks the return value of the bpf_get_func_ip()
helper, which should return the function address.
There's a short-term workaround for kprobe_multi bpf programs made by
Alexei [1], but we need this fixup also for bpf_get_attach_cookie,
which returns a cookie based on the entry_ip value.
Move the fixup into the fprobe handler, so both bpf_get_func_ip
and bpf_get_attach_cookie get the expected function address when the
CONFIG_X86_KERNEL_IBT option is enabled.
Also rename the kprobe_multi_link_handler entry_ip argument to fentry_ip
so it's clearer that this is an ftrace __fentry__ ip.
[1] commit 7f0059b58f02 ("selftests/bpf: Fix kprobe_multi test.")
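A hedged sketch of the fixup (x86-specific; ENDBR_INSN_SIZE and is_endbr()
come from asm/ibt.h, and the exact placement in the fprobe handler may differ):
  static unsigned long get_entry_ip(unsigned long fentry_ip)
  {
  #ifdef CONFIG_X86_KERNEL_IBT
          u32 instr;

          /* if the instruction before __fentry__ is ENDBR, the real entry
           * is ENDBR_INSN_SIZE bytes earlier
           */
          if (!get_kernel_nofault(instr, (u32 *)(fentry_ip - ENDBR_INSN_SIZE)) &&
              is_endbr(instr))
                  fentry_ip -= ENDBR_INSN_SIZE;
  #endif
          return fentry_ip;
  }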
Cc: Peter Zijlstra <peterz@infradead.org>
Reported-by: Martynas Pumputis <m@lambda.lt>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Link: https://lore.kernel.org/r/20220926153340.1621984-5-jolsa@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Jiri Olsa [Mon, 26 Sep 2022 15:33:37 +0000 (17:33 +0200)]
bpf: Use given function address for trampoline ip arg
Use the function address given at generation time as the trampoline
ip argument. This way we directly get the function address that we
need, so we don't need to:
- read the ip from the stack
- subtract X86_PATCH_SIZE
- subtract ENDBR_INSN_SIZE if CONFIG_X86_KERNEL_IBT is enabled,
which is not even implemented yet ;-)
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Link: https://lore.kernel.org/r/20220926153340.1621984-4-jolsa@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Jiri Olsa [Mon, 26 Sep 2022 15:33:36 +0000 (17:33 +0200)]
ftrace: Keep the resolved addr in kallsyms_callback
Keep the resolved 'addr' in kallsyms_callback instead of taking the
ftrace_location value, because we depend on the symbol address in the
cookie-related code.
With the CONFIG_X86_KERNEL_IBT option the ftrace_location value differs
from the symbol address, which breaks the symbol address cookie matching.
There are 2 users of this function:
- bpf_kprobe_multi_link_attach
    which this fix is for
- get_ftrace_locations
    which is used by register_fprobe_syms;
    this function needs to get symbols resolved to addresses,
    but does not need 'ftrace location addresses' at this point;
    there's another ftrace location translation in the path, done
    by the ftrace_set_filter_ips call:
register_fprobe_syms
  addrs = get_ftrace_locations
  register_fprobe_ips(addrs)
    ...
    ftrace_set_filter_ips
      ...
      __ftrace_match_addr
        ip = ftrace_location(ip);
        ...
Reviewed-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Link: https://lore.kernel.org/r/20220926153340.1621984-3-jolsa@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Jiri Olsa [Mon, 26 Sep 2022 15:33:35 +0000 (17:33 +0200)]
kprobes: Add new KPROBE_FLAG_ON_FUNC_ENTRY kprobe flag
Add the KPROBE_FLAG_ON_FUNC_ENTRY kprobe flag to indicate that the
attach address is on the function entry. This is used in the following
changes in the get_func_ip helper to return the correct function address.
Acked-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Link: https://lore.kernel.org/r/20220926153340.1621984-2-jolsa@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Liu Jian [Wed, 7 Sep 2022 07:13:11 +0000 (15:13 +0800)]
skmsg: Schedule psock work if the cached skb exists on the psock
In the sk_psock_backlog function, for an ingress-direction skb, if no new data
packet arrives after the skb is cached, the cached skb does not get a
chance to be added to the receive queue of the psock. As a result, the cached
skb cannot be received by the upper-layer application. Fix this by rescheduling
the psock work to dispose of the cached skb in the sk_msg_recvmsg function.
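A hedged sketch of the fix (field and function names assumed from that era's
struct sk_psock; the exact condition in the patch may differ):
  /* at the end of sk_msg_recvmsg(): if an skb is still cached on the psock,
   * kick the backlog work so the data eventually reaches the ingress queue
   * even when no new packet arrives
   */
  if (psock->work_state.skb && copied > 0)
          schedule_work(&psock->work);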
Fixes: 604326b41a6f ("bpf, sockmap: convert to generic sk_msg interface")
Signed-off-by: Liu Jian <liujian56@huawei.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Link: https://lore.kernel.org/bpf/20220907071311.60534-1-liujian56@huawei.com
Liu Jian [Tue, 23 Aug 2022 13:37:55 +0000 (21:37 +0800)]
selftests/bpf: Add wait send memory test for sockmap redirect
Add a test for sockmap redirect that waits on the redirected socket's send memory.
Signed-off-by: Liu Jian <liujian56@huawei.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Link: https://lore.kernel.org/bpf/20220823133755.314697-3-liujian56@huawei.com
Liu Jian [Tue, 23 Aug 2022 13:37:54 +0000 (21:37 +0800)]
net: If sock is dead don't access sock's sk_wq in sk_stream_wait_memory
Fixes the below NULL pointer dereference:
[...]
[ 14.471200] Call Trace:
[ 14.471562] <TASK>
[ 14.471882] lock_acquire+0x245/0x2e0
[ 14.472416] ? remove_wait_queue+0x12/0x50
[ 14.473014] ? _raw_spin_lock_irqsave+0x17/0x50
[ 14.473681] _raw_spin_lock_irqsave+0x3d/0x50
[ 14.474318] ? remove_wait_queue+0x12/0x50
[ 14.474907] remove_wait_queue+0x12/0x50
[ 14.475480] sk_stream_wait_memory+0x20d/0x340
[ 14.476127] ? do_wait_intr_irq+0x80/0x80
[ 14.476704] do_tcp_sendpages+0x287/0x600
[ 14.477283] tcp_bpf_push+0xab/0x260
[ 14.477817] tcp_bpf_sendmsg_redir+0x297/0x500
[ 14.478461] ? __local_bh_enable_ip+0x77/0xe0
[ 14.479096] tcp_bpf_send_verdict+0x105/0x470
[ 14.479729] tcp_bpf_sendmsg+0x318/0x4f0
[ 14.480311] sock_sendmsg+0x2d/0x40
[ 14.480822] ____sys_sendmsg+0x1b4/0x1c0
[ 14.481390] ? copy_msghdr_from_user+0x62/0x80
[ 14.482048] ___sys_sendmsg+0x78/0xb0
[ 14.482580] ? vmf_insert_pfn_prot+0x91/0x150
[ 14.483215] ? __do_fault+0x2a/0x1a0
[ 14.483738] ? do_fault+0x15e/0x5d0
[ 14.484246] ? __handle_mm_fault+0x56b/0x1040
[ 14.484874] ? lock_is_held_type+0xdf/0x130
[ 14.485474] ? find_held_lock+0x2d/0x90
[ 14.486046] ? __sys_sendmsg+0x41/0x70
[ 14.486587] __sys_sendmsg+0x41/0x70
[ 14.487105] ? intel_pmu_drain_pebs_core+0x350/0x350
[ 14.487822] do_syscall_64+0x34/0x80
[ 14.488345] entry_SYSCALL_64_after_hwframe+0x63/0xcd
[...]
The test scenario has the following flow:
thread1                                 thread2
-----------                             ---------------
tcp_bpf_sendmsg
 tcp_bpf_send_verdict
  tcp_bpf_sendmsg_redir                 sock_close
   tcp_bpf_push_locked                    __sock_release
    tcp_bpf_push                            //inet_release
     do_tcp_sendpages                       sock->ops->release
      sk_stream_wait_memory                   // tcp_close
       sk_wait_event                            sk->sk_prot->close
        release_sock(__sk);
        ***                                       lock_sock(sk);
                                                    __tcp_close
                                                      sock_orphan(sk)
                                                        sk->sk_wq = NULL
                                                  release_sock
        ****
        lock_sock(__sk);
       remove_wait_queue(sk_sleep(sk), &wait);
        sk_sleep(sk)
        //NULL pointer dereference
         &rcu_dereference_raw(sk->sk_wq)->wait
While waiting for memory in thread1, the socket is released with its wait
queue because thread2 has closed it. This is caused by tcp_bpf_send_verdict
not increasing the f_count of psock->sk_redir->sk_socket->file in thread1.
We should check whether the SOCK_DEAD flag is set on wakeup in sk_stream_wait_memory
before accessing the wait queue.
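A hedged sketch of where the check goes (simplified; the surrounding wait loop
in sk_stream_wait_memory() is omitted):
  /* after being woken up inside the wait loop */
  if (sock_flag(sk, SOCK_DEAD))
          goto do_error;  /* socket was orphaned, sk->sk_wq may already be NULL */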
Suggested-by: Jakub Sitnicki <jakub@cloudflare.com>
Signed-off-by: Liu Jian <liujian56@huawei.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Cc: Eric Dumazet <edumazet@google.com>
Link: https://lore.kernel.org/bpf/20220823133755.314697-2-liujian56@huawei.com
Alexei Starovoitov [Sat, 24 Sep 2022 01:14:45 +0000 (18:14 -0700)]
Merge branch 'veristat: further usability improvements'
Andrii Nakryiko says:
====================
A small patch set adding a few usability improvements and features making
veristat a more convenient tool for work on the BPF verifier:
- patch #2 speeds up and makes stats parsing from the BPF verifier log more
robust;
- patch #3 makes veristat less strict about input object files; veristat
will ignore non-BPF ELF files;
- patch #4 adds a progress log, enabled by default, so that a user doing
mass-verification is aware that veristat is not stuck;
- patch #5 allows tuning the requested BPF verifier log level, which makes
veristat the simplest way to get a BPF verifier log, especially for successfully
verified programs.
v1->v2:
- don't emit progress in non-table mode, as it breaks CSV output.
====================
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Andrii Nakryiko [Fri, 23 Sep 2022 17:59:13 +0000 (10:59 -0700)]
selftests/bpf: allow to adjust BPF verifier log level in veristat
Add a -l (--log-level) flag to override the default BPF verifier log level.
This only matters in verbose mode, which is the mode in which veristat
emits the verifier log for each processed BPF program.
This is important because for successfully verified BPF programs the
log_level 1 log is empty, as the BPF verifier truncates all the successfully
verified paths. So -l2 is the only way to actually get the BPF verifier log
in practice. It looks something like this:
[vmuser@archvm bpf]$ sudo ./veristat xdp_tx.bpf.o -vl2
Processing 'xdp_tx.bpf.o'...
PROCESSING xdp_tx.bpf.o/xdp_tx, DURATION US: 19, VERDICT: success, VERIFIER LOG:
func#0 @0
0: R1=ctx(off=0,imm=0) R10=fp0
; return XDP_TX;
0: (b4) w0 = 3 ; R0_w=3
1: (95) exit
verification time 19 usec
stack depth 0
processed 2 insns (limit 1000000) max_states_per_insn 0 total_states 0 peak_states 0 mark_read 0
File Program Verdict Duration (us) Total insns Total states Peak states
------------ ------- ------- ------------- ----------- ------------ -----------
xdp_tx.bpf.o xdp_tx success 19 2 0 0
------------ ------- ------- ------------- ----------- ------------ -----------
Done. Processed 1 files, 0 programs. Skipped 1 files, 0 programs.
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/r/20220923175913.3272430-6-andrii@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Andrii Nakryiko [Fri, 23 Sep 2022 17:59:12 +0000 (10:59 -0700)]
selftests/bpf: emit processing progress and add quiet mode to veristat
Emit "Processing <filepath>..." for each BPF object file to be
processed, to show progress. But also add a -q (--quiet) flag to silence
such messages. Doing something more clever (like overwriting the same output
line) is too cumbersome and easily breaks if there is any other
console output (e.g., errors from libbpf).
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/r/20220923175913.3272430-5-andrii@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Andrii Nakryiko [Fri, 23 Sep 2022 17:59:11 +0000 (10:59 -0700)]
selftests/bpf: make veristat skip non-BPF and failing-to-open BPF objects
Make veristat ignore non-BPF object files. This allows simpler
mass-verification (e.g., `sudo ./veristat *.bpf.o` in the selftests/bpf
directory). Note that `sudo ./veristat *.o` would also work, but with
selftests' multiple copies of BPF object files (.bpf.o and
.bpf.linked{1,2,3}.o) it's 4x slower.
Also, given that some BPF object files could be incomplete in the sense
that they are meant to be statically linked into a final BPF object file
(like linked_maps, linked_funcs, linked_vars), note such instances in
stderr, but proceed anyway. This seems like a better trade-off between
completely silently ignoring a BPF object file and aborting
mass-verification altogether.
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/r/20220923175913.3272430-4-andrii@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Andrii Nakryiko [Fri, 23 Sep 2022 17:59:10 +0000 (10:59 -0700)]
selftests/bpf: make veristat's verifier log parsing faster and more robust
Make sure veristat doesn't spend a ridiculous amount of time parsing
verifier stats from the verifier log, especially for very large logs or
truncated logs (e.g., when the verifier returns -ENOSPC due to a too small
buffer). For this, parse lines from the end of the log and make sure we
parse only up to the last 100 lines, where the stats should be, if at all.
Suggested-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/r/20220923175913.3272430-3-andrii@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Andrii Nakryiko [Fri, 23 Sep 2022 17:59:09 +0000 (10:59 -0700)]
selftests/bpf: add sign-file to .gitignore
Add sign-file to .gitignore to avoid accidentally checking it in.
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/r/20220923175913.3272430-2-andrii@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Andrii Nakryiko [Fri, 23 Sep 2022 23:05:59 +0000 (16:05 -0700)]
libbpf: restore memory layout of bpf_object_open_opts
When the attach_prog_fd field was removed in libbpf 1.0 and replaced with
a `long: 0` placeholder, it actually shifted all the subsequent fields by
8 bytes. This is due to `long: 0` promising to adjust the next field's offset
to a long-aligned offset. But in this case we were already long-aligned,
as pin_root_path is a pointer. So `long: 0` had no effect, and thus
didn't fill the gap created by the removed attach_prog_fd.
A non-zero bitfield should have been used instead. I validated using
pahole. Originally the kconfig field was at offset 40. With `long: 0` it's
at offset 32, which is wrong. With this change it's back at offset 40.
While technically libbpf 1.0 is allowed to break backwards
compatibility and applications should have been recompiled against
libbpf 1.0 headers, given how trivial it is to preserve the memory
layout, let's fix this.
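As a standalone illustration (plain userspace C, not libbpf code; member names
are made up and offsets assume an LP64 ABI such as x86-64 Linux) of why a
zero-width bitfield cannot stand in for a removed 8-byte field:
  #include <stdio.h>
  #include <stddef.h>

  /* what the struct effectively looked like before the field removal */
  struct before { const char *a; int fd; long: 0; const char *b; };
  /* fd removed, `long: 0` kept: b is already long-aligned, so it moves up */
  struct broken { const char *a; long: 0; const char *b; };
  /* a non-zero-width unnamed bitfield does reserve the 8 bytes */
  struct padded { const char *a; long: 64; const char *b; };

  int main(void)
  {
          printf("before: b at %zu\n", offsetof(struct before, b)); /* 16 */
          printf("broken: b at %zu\n", offsetof(struct broken, b)); /*  8 */
          printf("padded: b at %zu\n", offsetof(struct padded, b)); /* 16 */
          return 0;
  }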
Reported-by: Grant Seltzer Richman <grantseltzer@gmail.com>
Fixes: 146bf811f5ac ("libbpf: remove most other deprecated high-level APIs")
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/r/20220923230559.666608-1-andrii@kernel.org
Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
Wang Yufen [Thu, 22 Sep 2022 06:28:44 +0000 (14:28 +0800)]
libbpf: Add pathname_concat() helper
Move the snprintf and length check into a common helper, pathname_concat(), to
make the code simpler.
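A hedged sketch of what such a helper could look like (the name comes from the
subject line; the exact libbpf implementation may differ):
  #include <stdio.h>
  #include <errno.h>

  static int pathname_concat(char *buf, size_t buf_sz,
                             const char *path, const char *name)
  {
          int len;

          len = snprintf(buf, buf_sz, "%s/%s", path, name);
          if (len < 0)
                  return -EINVAL;
          if ((size_t)len >= buf_sz)
                  return -ENAMETOOLONG;
          return 0;
  }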
Signed-off-by: Wang Yufen <wangyufen@huawei.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/1663828124-10437-1-git-send-email-wangyufen@huawei.com
Yosry Ahmed [Mon, 19 Sep 2022 17:53:30 +0000 (17:53 +0000)]
selftests/bpf: Simplify cgroup_hierarchical_stats selftest
The cgroup_hierarchical_stats selftest is complicated. It has to be,
because it tests an entire workflow of recording, aggregating, and
dumping cgroup stats. However, some of the complexity is unnecessary.
The test now enables the memory controller in a cgroup hierarchy, invokes
reclaim, measures reclaim time, THEN uses that reclaim time to test the
stats collection and aggregation. We don't need to use such a
complicated stat, as the context in which the stat is collected is
orthogonal.
Simplify the test by using a simple stat instead of reclaim time, the
total number of times a process has ever entered a cgroup. This makes
the test simpler and removes the dependency on the memory controller and
the memory reclaim interface.
Signed-off-by: Yosry Ahmed <yosryahmed@google.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: KP Singh <kpsingh@kernel.org>
Link: https://lore.kernel.org/bpf/20220919175330.890793-1-yosryahmed@google.com
Martin KaFai Lau [Thu, 22 Sep 2022 23:11:35 +0000 (16:11 -0700)]
Merge branch 'Fix resource leaks in test_maps'
Hou Tao says:
====================
From: Hou Tao <houtao1@huawei.com>
Hi,
It is just a tiny patch set that aims to fix the resource leaks in test_maps
after a test case succeeds or is skipped. These leaks were spotted by
using the address sanitizer and checking the content of /proc/$pid/fd.
Please see the individual patches for more details.
Change Log:
v2:
* Add the missing header file unistd.h for close() (From kernel-patches/bpf)
The reason why I missed that is that -Werror is removed from the Makefile
when enabling the clang address sanitizer.
v1:
* https://lore.kernel.org/bpf/20220921025855.115463-1-houtao@huaweicloud.com/T/
====================
Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
Hou Tao [Wed, 21 Sep 2022 07:00:35 +0000 (15:00 +0800)]
selftests/bpf: Free the allocated resources after test case succeeds
Free the created fd or the allocated bpf_object after a test case succeeds,
otherwise there will be resource leaks.
Spotted by using the address sanitizer and checking the content of the
/proc/$pid/fd directory.
Signed-off-by: Hou Tao <houtao1@huawei.com>
Link: https://lore.kernel.org/r/20220921070035.2016413-3-houtao@huaweicloud.com
Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
Hou Tao [Wed, 21 Sep 2022 07:00:34 +0000 (15:00 +0800)]
selftests/bpf: Destroy the skeleton when CONFIG_PREEMPT is off
Destroy the created skeleton when CONFIG_PREEMPT is off, otherwise there will
be a resource leak.
Fixes: 73b97bc78b32 ("selftests/bpf: Test concurrent updates on bpf_task_storage_busy")
Signed-off-by: Hou Tao <houtao1@huawei.com>
Link: https://lore.kernel.org/r/20220921070035.2016413-2-houtao@huaweicloud.com
Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
Yauheni Kaliuta [Tue, 20 Sep 2022 16:14:09 +0000 (19:14 +0300)]
selftests/bpf: Add liburandom_read.so to TEST_GEN_FILES
The added urandom_read shared library is missing from the list of installed
files, which breaks the urandom_read test after `make install` or `make
gen_tar`.
Add the library to TEST_GEN_FILES. The names in the list do not
contain $(OUTPUT) since it's added by the lib.mk code.
Fixes: 00a0fa2d7d49 ("selftests/bpf: Add urandom_read shared lib and USDTs")
Signed-off-by: Yauheni Kaliuta <ykaliuta@redhat.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20220920161409.129953-1-ykaliuta@redhat.com
Alexei Starovoitov [Thu, 22 Sep 2022 02:41:37 +0000 (19:41 -0700)]
Merge branch 'veristat: CSV output, comparison mode, filtering'
Andrii Nakryiko says:
====================
Add three more critical features to the veristat tool, which make it sufficient
for practical work on the BPF verifier:
- CSV output, which allows easier programmatic post-processing of stats;
- building upon CSV output, veristat now supports a comparison mode, in which
two previously captured CSV outputs from veristat are compared with each
other in a convenient form;
- flexible allow/deny filtering using globs for BPF object files and
programs, allowing the user to narrow down the target BPF programs to be verified.
See individual patches for more details and examples.
v1->v2:
- split out double-free fix into patch #1 (Yonghong);
- fixed typo in verbose flag (Quentin);
- baseline and comparison stats were reversed in output table, fixed that.
====================
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Andrii Nakryiko [Wed, 21 Sep 2022 16:42:54 +0000 (09:42 -0700)]
selftests/bpf: add ability to filter programs in veristat
Add a -f (--filter) argument which accepts glob-based filters for
narrowing down which BPF object files and programs within them should be
processed by veristat. This filtering applies both to comparison and
main (verification) modes.
A filter can take one of two forms:
- file (object) filter: 'strobemeta*'; in this case all the programs
within matching files are implicitly allowed (or denied, depending on
whether it's a positive or negative rule, see below);
- file and prog filter: 'strobemeta*/*unroll*' will further filter
programs within matching files to only allow those program names that
match '*unroll*' glob.
As mentioned, filters can be positive (allowlisting) and negative
(denylisting). Negative filters should start with '!': '!strobemeta*'
will deny any filename whose basename starts with "strobemeta".
Further, one extra special syntax is supported to allow more convenient
use in practice. Instead of specifying a rule on the command line,
veristat allows specifying a file that contains rules, both positive and
negative, one filter per line. This is achieved with -f @<filepath>,
where <filepath> points to a text file containing rules (negative
and positive rules can be mixed); see the example at the end of this message.
For convenience, empty lines and lines
starting with '#' are ignored. This feature is useful for keeping a
pre-canned list of object files and program names that are tested
repeatedly, allowing one to check in a list of rules and quickly specify
them on the command line.
As a demonstration (and a shortcut for the near future), create a small
list of "interesting" BPF object files from selftests/bpf and commit it
as veristat.cfg. It currently includes 73 programs, most of which are
the most complex and largest BPF programs in selftests, as judged by
total verified instruction count and total verifier states.
If there is overlap between positive and negative filters, the negative
filter takes precedence (denylisting is stronger than allowlisting). If
no allow filter is specified, veristat implicitly assumes the '*/*' rule. If
no deny rule is specified, veristat (logically) assumes no negative
filters.
Also note that -f (just like -e and -s) can be specified multiple times
and their effect is cumulative.
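As a hypothetical illustration of the rule-file syntax described above (the
object and program names are made up):
  $ cat filters.txt
  # allow all programs from these objects...
  pyperf*
  strobemeta*
  # ...but skip any program with 'nounroll' in its name
  !*/*nounroll*
  $ sudo ./veristat *.bpf.o -f @filters.txt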
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/r/20220921164254.3630690-5-andrii@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Andrii Nakryiko [Wed, 21 Sep 2022 16:42:53 +0000 (09:42 -0700)]
selftests/bpf: add comparison mode to veristat
Add ability to compare and contrast two veristat runs, previously
recorded with veristat using CSV output format.
When veristat is called with the -C (--compare) flag, veristat expects
exactly two input files to be specified, both in CSV format.
The expectation is that they are output from previous veristat runs, but as long
as column names and formats match, it should just work. The first CSV file
is designated as the "baseline", and the second one as the
comparison (experiment) data set. Establishing the baseline matters later
when calculating difference percentages, see below.
Veristat parses these two CSV files and "reconstructs" verifier stats
(it could be just a subset of all possible stats). File and program
names are mandatory as they are used as the joining key (these two "stats"
are designated as "key stats" in the code).
Veristat currently enforces that the set of stats recorded in both CSVs
has to match exactly, down to the exact order. This is just a simplifying
condition which can be lifted with a bit of additional pre-processing to
reorder stat specs internally, which I didn't bother doing yet.
For all the non-key stats, veristat will output three columns: one for
baseline data, one for comparison data, and one with an absolute and
relative percentage difference. If either baseline or comparison values
are missing (that is, respective CSV file doesn't have a row with
*exactly* matching file and program name), those values are assumed to
be empty or zero. In such a case relative percentages are forced to +100%
or -100%, for consistency with the typical case.
Veristat's -e (--emit) and -s (--sort) specs still apply, so even if CSV
contains lots of stats, the user can request to compare only a subset of
them (and specify the desired column order as well). Similarly, both CSV and
human-readable table output are honored. Note that input is currently
always expected to be CSV.
Here's an example shell session, recording data for the biosnoop tool on two
different kernels and comparing them afterwards, outputting the data in table
format.
# on slightly older production kernel
$ sudo ./veristat biosnoop_bpf.o
File Program Verdict Duration (us) Total insns Total states Peak states
-------------- ------------------------ ------- ------------- ----------- ------------ -----------
biosnoop_bpf.o blk_account_io_merge_bio success 37 24 1 1
biosnoop_bpf.o blk_account_io_start failure 0 0 0 0
biosnoop_bpf.o block_rq_complete success 76 104 6 6
biosnoop_bpf.o block_rq_insert success 83 85 7 7
biosnoop_bpf.o block_rq_issue success 79 85 7 7
-------------- ------------------------ ------- ------------- ----------- ------------ -----------
Done. Processed 1 object files, 5 programs.
$ sudo ./veristat ~/local/tmp/fbcode-bpf-objs/biosnoop_bpf.o -o csv > baseline.csv
$ cat baseline.csv
file_name,prog_name,verdict,duration,total_insns,total_states,peak_states
biosnoop_bpf.o,blk_account_io_merge_bio,success,36,24,1,1
biosnoop_bpf.o,blk_account_io_start,failure,0,0,0,0
biosnoop_bpf.o,block_rq_complete,success,82,104,6,6
biosnoop_bpf.o,block_rq_insert,success,78,85,7,7
biosnoop_bpf.o,block_rq_issue,success,74,85,7,7
# on latest bpf-next kernel
$ sudo ./veristat biosnoop_bpf.o
File Program Verdict Duration (us) Total insns Total states Peak states
-------------- ------------------------ ------- ------------- ----------- ------------ -----------
biosnoop_bpf.o blk_account_io_merge_bio success 31 24 1 1
biosnoop_bpf.o blk_account_io_start failure 0 0 0 0
biosnoop_bpf.o block_rq_complete success 76 104 6 6
biosnoop_bpf.o block_rq_insert success 83 91 7 7
biosnoop_bpf.o block_rq_issue success 74 91 7 7
-------------- ------------------------ ------- ------------- ----------- ------------ -----------
Done. Processed 1 object files, 5 programs.
$ sudo ./veristat biosnoop_bpf.o -o csv > comparison.csv
$ cat comparison.csv
file_name,prog_name,verdict,duration,total_insns,total_states,peak_states
biosnoop_bpf.o,blk_account_io_merge_bio,success,71,24,1,1
biosnoop_bpf.o,blk_account_io_start,failure,0,0,0,0
biosnoop_bpf.o,block_rq_complete,success,82,104,6,6
biosnoop_bpf.o,block_rq_insert,success,83,91,7,7
biosnoop_bpf.o,block_rq_issue,success,87,91,7,7
# now let's compare with human-readable output (note that no sudo needed)
# we also ignore verification duration in this case to shorten the output
$ ./veristat -C baseline.csv comparison.csv -e file,prog,verdict,insns
File Program Verdict (A) Verdict (B) Verdict (DIFF) Total insns (A) Total insns (B) Total insns (DIFF)
-------------- ------------------------ ----------- ----------- -------------- --------------- --------------- ------------------
biosnoop_bpf.o blk_account_io_merge_bio success success MATCH 24 24 +0 (+0.00%)
biosnoop_bpf.o blk_account_io_start failure failure MATCH 0 0 +0 (+100.00%)
biosnoop_bpf.o block_rq_complete success success MATCH 104 104 +0 (+0.00%)
biosnoop_bpf.o block_rq_insert success success MATCH 91 85 -6 (-6.59%)
biosnoop_bpf.o block_rq_issue success success MATCH 91 85 -6 (-6.59%)
-------------- ------------------------ ----------- ----------- -------------- --------------- --------------- ------------------
While not a particularly exciting example (it turned out to be kind of hard to
quickly find a nice example with a significant difference just because of a kernel
version bump), it should demonstrate the main features.
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/r/20220921164254.3630690-4-andrii@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Andrii Nakryiko [Wed, 21 Sep 2022 16:42:52 +0000 (09:42 -0700)]
selftests/bpf: add CSV output mode for veristat
Teach veristat to output results as a CSV table for easier programmatic
processing. Change what was the --output/-o argument to now be --emit/-e.
And then use --output-format/-o <fmt> to specify the output format.
Currently "table" and "csv" are supported, with table being the default.
For CSV output mode veristat uses spec identifiers as column names.
E.g., instead of "Total states" veristat uses "total_states" as a CSV
header name.
Internally veristat recognizes three formats; one of them
(RESFMT_TABLE_CALCLEN) is a special format instructing veristat to
calculate column widths for table output. This felt a bit cleaner and
more uniform than creating separate functions just for this.
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/r/20220921164254.3630690-3-andrii@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Andrii Nakryiko [Wed, 21 Sep 2022 16:42:51 +0000 (09:42 -0700)]
selftests/bpf: fix double bpf_object__close() in veristat
bpf_object__close(obj) is called twice for BPF object files with a single
BPF program in them. This causes a crash. Fix this by not calling
bpf_object__close() unnecessarily.
Fixes: c8bc5e050976 ("selftests/bpf: Add veristat tool for mass-verifying BPF object files")
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/r/20220921164254.3630690-2-andrii@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Alexei Starovoitov [Thu, 22 Sep 2022 02:25:26 +0000 (19:25 -0700)]
Merge branch 'Introduce bpf_ct_set_nat_info kfunc helper'
Lorenzo Bianconi says:
====================
Introduce bpf_ct_set_nat_info kfunc helper in order to set source and
destination nat addresses/ports in a newly allocated ct entry not yet inserted
into the connection tracking table.
Introduce support for per-parameter trusted args.
Changes since v2:
- use int instead of a pointer for port in bpf_ct_set_nat_info signature
- modify the KF_TRUSTED_ARGS definition so that the referenced pointer constraint
applies just to PTR_TO_BTF_ID
- drop patch 2/4
Changes since v1:
- enable CONFIG_NF_NAT in tools/testing/selftests/bpf/config
Kumar Kartikeya Dwivedi (1):
bpf: Tweak definition of KF_TRUSTED_ARGS
====================
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Lorenzo Bianconi [Wed, 21 Sep 2022 16:48:27 +0000 (18:48 +0200)]
selftests/bpf: add tests for bpf_ct_set_nat_info kfunc
Introduce self-tests for bpf_ct_set_nat_info kfunc used to set the
source or destination nat addresses/ports.
Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>
Link: https://lore.kernel.org/r/803e33294e247744d466943105879414344d3235.1663778601.git.lorenzo@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Lorenzo Bianconi [Wed, 21 Sep 2022 16:48:26 +0000 (18:48 +0200)]
net: netfilter: add bpf_ct_set_nat_info kfunc helper
Introduce bpf_ct_set_nat_info kfunc helper in order to set source and
destination nat addresses/ports in a newly allocated ct entry not yet inserted
into the connection tracking table.
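A hedged usage sketch from the BPF program side (the extern declaration below
is an assumption based on this series' description; allocating the ct entry
with bpf_skb_ct_alloc()/bpf_xdp_ct_alloc() and inserting it are not shown,
and bpf_htonl() comes from <bpf/bpf_endian.h>):
  extern int bpf_ct_set_nat_info(struct nf_conn___init *nfct,
                                 union nf_inet_addr *addr, int port,
                                 enum nf_nat_manip_type manip) __ksym;

  /* DNAT a freshly allocated, not yet inserted ct entry to 10.0.0.1:8080 */
  static __always_inline int set_dnat(struct nf_conn___init *ct)
  {
          union nf_inet_addr addr = { .ip = bpf_htonl(0x0a000001) };

          return bpf_ct_set_nat_info(ct, &addr, 8080, NF_NAT_MANIP_DST);
  }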
Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>
Link: https://lore.kernel.org/r/9567db2fdfa5bebe7b7cc5870f7a34549418b4fc.1663778601.git.lorenzo@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Kumar Kartikeya Dwivedi [Wed, 21 Sep 2022 16:48:25 +0000 (18:48 +0200)]
bpf: Tweak definition of KF_TRUSTED_ARGS
Instead of forcing all arguments to be referenced pointers with non-zero
reg->ref_obj_id, tweak the definition of KF_TRUSTED_ARGS to mean that
only PTR_TO_BTF_ID (and socket types translated to PTR_TO_BTF_ID) have
that constraint, and require their offset to be set to 0.
The rest of the pointer types are also accommodated in this definition of
trusted pointers, but with more relaxed rules regarding offsets.
The inherent meaning of setting this flag is that all kfunc pointer
arguments have a guaranteed lifetime, and kernel object pointers
(PTR_TO_BTF_ID, PTR_TO_CTX) are passed in their unmodified form (with
offset 0). In general, this is not true for PTR_TO_BTF_ID as it can be
obtained using pointer walks.
Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>
Link: https://lore.kernel.org/r/cdede0043c47ed7a357f0a915d16f9ce06a1d589.1663778601.git.lorenzo@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Hou Tao [Wed, 21 Sep 2022 07:38:26 +0000 (15:38 +0800)]
bpf: Always use raw spinlock for hash bucket lock
For a non-preallocated hash map on an RT kernel, a regular spinlock instead
of a raw spinlock is used for the bucket lock. The reason is that on an RT kernel
memory allocation is forbidden in atomic context and a regular spinlock
is sleepable under RT.
Now the hash map has been fully converted to use bpf_mem_alloc, and there
will be no synchronous memory allocation for a non-preallocated hash map,
so it is safe to always use a raw spinlock for the bucket lock on RT. So
remove the usage of htab_use_raw_lock() and update the comments
accordingly.
Signed-off-by: Hou Tao <houtao1@huawei.com>
Link: https://lore.kernel.org/r/20220921073826.2365800-1-houtao@huaweicloud.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Jiri Olsa [Fri, 16 Sep 2022 07:19:14 +0000 (09:19 +0200)]
bpf: Prevent bpf program recursion for raw tracepoint probes
We got a report from syzbot [1] about warnings that were caused by
a bpf program attached to the contention_begin raw tracepoint triggering
the same tracepoint by using the bpf_trace_printk helper, which takes
the trace_printk_lock lock.
Call Trace:
<TASK>
? trace_event_raw_event_bpf_trace_printk+0x5f/0x90
bpf_trace_printk+0x2b/0xe0
bpf_prog_a9aec6167c091eef_prog+0x1f/0x24
bpf_trace_run2+0x26/0x90
native_queued_spin_lock_slowpath+0x1c6/0x2b0
_raw_spin_lock_irqsave+0x44/0x50
bpf_trace_printk+0x3f/0xe0
bpf_prog_a9aec6167c091eef_prog+0x1f/0x24
bpf_trace_run2+0x26/0x90
native_queued_spin_lock_slowpath+0x1c6/0x2b0
_raw_spin_lock_irqsave+0x44/0x50
bpf_trace_printk+0x3f/0xe0
bpf_prog_a9aec6167c091eef_prog+0x1f/0x24
bpf_trace_run2+0x26/0x90
native_queued_spin_lock_slowpath+0x1c6/0x2b0
_raw_spin_lock_irqsave+0x44/0x50
bpf_trace_printk+0x3f/0xe0
bpf_prog_a9aec6167c091eef_prog+0x1f/0x24
bpf_trace_run2+0x26/0x90
native_queued_spin_lock_slowpath+0x1c6/0x2b0
_raw_spin_lock_irqsave+0x44/0x50
__unfreeze_partials+0x5b/0x160
...
This can be reproduced by attaching a bpf program as a raw tracepoint on
the contention_begin tracepoint. The bpf prog calls the bpf_trace_printk
helper. Then, by running perf bench, the spin lock code is forced to
take the slow path and call the contention_begin tracepoint.
Fix this by skipping execution of the bpf program if it's
already running, using the bpf prog 'active' field, which is
currently used by trampoline programs for the same reason.
Move bpf_prog_inc_misses_counter to syscall.c because
trampoline.c is compiled in only for the CONFIG_BPF_JIT option.
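A hedged sketch of the guard described above (simplified; the real
__bpf_trace_run() and its bpf_trace_run<N> wrappers contain more detail):
  static void __bpf_trace_run(struct bpf_prog *prog, u64 *args)
  {
          cant_sleep();
          /* skip execution if this prog is already running on this CPU */
          if (unlikely(this_cpu_inc_return(*(prog->active)) != 1)) {
                  bpf_prog_inc_misses_counter(prog);
                  goto out;
          }
          rcu_read_lock();
          (void) bpf_prog_run(prog, args);
          rcu_read_unlock();
  out:
          this_cpu_dec(*(prog->active));
  }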
Reviewed-by: Stanislav Fomichev <sdf@google.com>
Reported-by: syzbot+2251879aa068ad9c960d@syzkaller.appspotmail.com
[1] https://lore.kernel.org/bpf/YxhFe3EwqchC%2FfYf@krava/T/#t
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Link: https://lore.kernel.org/r/20220916071914.7156-1-jolsa@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Alexei Starovoitov [Thu, 22 Sep 2022 00:50:02 +0000 (17:50 -0700)]
Merge branch 'bpf: Add kfuncs for PKCS#7 signature verification'
Roberto Sassu says:
====================
One of the desirable features in security is the ability to restrict import
of data to a given system based on data authenticity. If data import can be
restricted, it would be possible to enforce a system-wide policy based on
the signing keys the system owner trusts.
This feature is widely used in the kernel. For example, if the restriction
is enabled, kernel modules can be plugged in only if they are signed with a
key whose public part is in the primary or secondary keyring.
For eBPF, it can be useful as well. For example, it might be useful to
authenticate data an eBPF program makes security decisions on.
After a discussion in the eBPF mailing list, it was decided that the stated
goal should be accomplished by introducing four new kfuncs:
bpf_lookup_user_key() and bpf_lookup_system_key(), for retrieving a keyring
with keys trusted for signature verification, respectively from its serial
and from a pre-determined ID; bpf_key_put(), to release the reference
obtained with the former two kfuncs; and bpf_verify_pkcs7_signature(), for
verifying PKCS#7 signatures.
Other than the key serial, bpf_lookup_user_key() also accepts key lookup
flags, that influence the behavior of the lookup. bpf_lookup_system_key()
accepts pre-determined IDs defined in include/linux/verification.h.
bpf_key_put() accepts the new bpf_key structure, introduced to tell whether
the other structure member, a key pointer, is valid or not. The reason is
that verify_pkcs7_signature() also accepts invalid pointers, set with the
pre-determined ID, to select a system-defined keyring. key_put() must be
called only for valid key pointers.
Since the two key lookup functions allocate memory and one increments a key
reference count, they must be used in conjunction with bpf_key_put(). The
latter must be called only if the lookup functions returned a non-NULL
pointer. The verifier denies the execution of eBPF programs that don't
respect this rule.
The two key lookup functions should be used as alternatives, depending on
the use case. While bpf_lookup_user_key() provides great flexibility, it
seems suboptimal in terms of security guarantees, as even if the eBPF
program is assumed to be trusted, the serial used to obtain the key pointer
might come from untrusted user space, which might not choose one that the system
administrator approves of for enforcing a mandatory policy.
bpf_lookup_system_key() instead provides much stronger guarantees,
especially if the pre-determined ID is not passed by user space but is
hardcoded in the eBPF program, and that program is signed. In this case,
bpf_verify_pkcs7_signature() will always perform signature verification
with a key that the system administrator approves, i.e. the primary,
secondary or platform keyring.
Nevertheless, key permission checks need to be done accurately. Since
bpf_lookup_user_key() cannot determine how a key will be used by other
kfuncs, it has to defer the permission check to the actual kfunc using the
key. It does so by calling lookup_user_key() with KEY_DEFER_PERM_CHECK as
the needed permission. Later, bpf_verify_pkcs7_signature(), if called,
completes the permission check by calling key_validate(). It does not need
to call key_task_permission() with permission KEY_NEED_SEARCH, as it is
already done elsewhere by the key subsystem. Future kfuncs using the
bpf_key structure need to implement the proper checks as well.
Finally, the last kfunc, bpf_verify_pkcs7_signature(), accepts the data and
signature to verify as eBPF dynamic pointers, to minimize the number of
kfunc parameters, and the keyring with keys for signature verification as a
bpf_key structure, returned by one of the two key lookup functions.
bpf_lookup_user_key() and bpf_verify_pkcs7_signature() can be called only
from sleepable programs, because of memory allocation and crypto
operations. For example, the lsm.s/bpf attach point is suitable,
fexit/array_map_update_elem is not.
The correctness of implementation of the new kfuncs and of their usage is
checked with the introduced tests.
The patch set includes a patch from another author (dependency) for the sake of
completeness. It is organized as follows.
Patch 1 from KP Singh allows kfuncs to be used by LSM programs. Patch 2
exports the bpf_dynptr definition through BTF. Patch 3 splits
is_dynptr_reg_valid_init() and introduces is_dynptr_type_expected(), to
know more precisely the cause of a negative result of a dynamic pointer
check. Patch 4 allows dynamic pointers to be used as kfunc parameters.
Patch 5 exports bpf_dynptr_get_size(), to obtain the real size of data
carried by a dynamic pointer. Patch 6 makes available for new eBPF kfuncs
and programs some key-related definitions. Patch 7 introduces the
bpf_lookup_*_key() and bpf_key_put() kfuncs. Patch 8 introduces the
bpf_verify_pkcs7_signature() kfunc. Patch 9 changes the testing kernel
configuration to compile everything as built-in. Finally, patches 10-13
introduce the tests.
Changelog
v17:
- Remove unnecessary typedefs in test_verify_pkcs7_sig.c (suggested by KP)
- Add patch to export bpf_dynptr through BTF (reported by KP)
- Rename u{8,16,32,64} variables to __u{8,16,32,64} in the tests, for
consistency with other eBPF programs (suggested by Yonghong)
v16:
- Remove comments in include/linux/key.h for KEY_LOOKUP_*
- Change kmalloc() flag from GFP_ATOMIC to GFP_KERNEL in
bpf_lookup_user_key(), as the kfunc needs anyway to be sleepable
(suggested by Kumar)
- Test passing a dynamic pointer with NULL data to
bpf_verify_pkcs7_signature() (suggested by Kumar)
v15:
- Add kfunc_dynptr_param test to deny list for s390x
v14:
- Explain that is_dynptr_type_expected() will be useful also for BTF
(suggested by Joanne)
- Rename KEY_LOOKUP_FLAGS_ALL to KEY_LOOKUP_ALL (suggested by Jarkko)
- Swap declaration of spi and dynptr_type in is_dynptr_type_expected()
(suggested by Joanne)
- Reimplement kfunc dynptr tests with a regular eBPF program instead of
executing them with test_verifier (suggested by Joanne)
- Make key lookup flags as enum so that they are automatically exported
through BTF (suggested by Alexei)
v13:
- Split is_dynptr_reg_valid_init() and introduce is_dynptr_type_expected()
to see if the dynamic pointer type passed as argument to a kfunc is
supported (suggested by Kumar)
- Add forward declaration of struct key in include/linux/bpf.h (suggested
by Song)
- Declare mask for key lookup flags, remove key_lookup_flags_check()
(suggested by Jarkko and KP)
- Allow only certain dynamic pointer types (currently, local) to be passed
as argument to kfuncs (suggested by Kumar)
- For each dynamic pointer parameter in kfunc, additionally check if the
passed pointer is to the stack (suggested by Kumar)
- Split the validity/initialization and dynamic pointer type check also in
the verifier, and adjust the expected error message in the test (a test
for an unexpected dynptr type passed to a helper cannot be added due to
missing suitable helpers, but this case has been tested manually)
- Add verifier tests to check the dynamic pointers passed as argument to
kfuncs (suggested by Kumar)
v12:
- Put lookup_key and verify_pkcs7_sig tests in deny list for s390x (JIT
does not support calling kernel function)
v11:
- Move stringify_struct() macro to include/linux/btf.h (suggested by
Daniel)
- Change kernel configuration options in
tools/testing/selftests/bpf/config* from =m to =y
v10:
- Introduce key_lookup_flags_check() and system_keyring_id_check() inline
functions to check parameters (suggested by KP)
- Fix descriptions and comment of key-related kfuncs (suggested by KP)
- Register kfunc set only once (suggested by Alexei)
- Move needed kernel options to the architecture-independent configuration
for testing
v9:
- Drop patch to introduce KF_SLEEPABLE kfunc flag (already merged)
- Rename valid_ptr member of bpf_key to has_ref (suggested by Daniel)
- Check dynamic pointers in kfunc definition with bpf_dynptr_kern struct
definition instead of string, to detect structure renames (suggested by
Daniel)
- Explicitly say that we permit initialized dynamic pointers in kfunc
definition (suggested by Daniel)
- Remove noinline __weak from kfuncs definition (reported by Daniel)
- Simplify key lookup flags check in bpf_lookup_user_key() (suggested by
Daniel)
- Explain the reason for deferring key permission check (suggested by
Daniel)
- Allocate memory with GFP_ATOMIC in bpf_lookup_system_key(), and remove
KF_SLEEPABLE kfunc flag from kfunc declaration (suggested by Daniel)
- Define only one kfunc set and remove the loop for registration
(suggested by Alexei)
v8:
- Define the new bpf_key structure to carry the key pointer and whether
that pointer is valid or not (suggested by Daniel)
- Drop patch to mark a kfunc parameter with the __maybe_null suffix
- Improve documentation of kfuncs
- Introduce bpf_lookup_system_key() to obtain a key pointer suitable for
verify_pkcs7_signature() (suggested by Daniel)
- Use the new kfunc registration API
- Drop patch to test the __maybe_null suffix
- Add tests for bpf_lookup_system_key()
v7:
- Add support for using dynamic and NULL pointers in kfunc (suggested by
Alexei)
- Add new kfunc-related tests
v6:
- Switch back to key lookup helpers + signature verification (until v5),
and defer permission check from bpf_lookup_user_key() to
bpf_verify_pkcs7_signature()
- Add additional key lookup test to illustrate the usage of the
KEY_LOOKUP_CREATE flag and validate the flags (suggested by Daniel)
- Make description of flags of bpf_lookup_user_key() more user-friendly
(suggested by Daniel)
- Fix validation of flags parameter in bpf_lookup_user_key() (reported by
Daniel)
- Rename bpf_verify_pkcs7_signature() keyring-related parameters to
user_keyring and system_keyring to make their purpose more clear
- Accept keyring-related parameters of bpf_verify_pkcs7_signature() as
alternatives (suggested by KP)
- Replace unsigned long type with u64 in helper declaration (suggested by
Daniel)
- Extend the bpf_verify_pkcs7_signature() test by calling the helper
without data, by ensuring that the helper enforces the keyring-related
parameters as alternatives, by ensuring that the helper rejects
inaccessible and expired keyrings, and by checking all system keyrings
- Move bpf_lookup_user_key() and bpf_key_put() usage tests to
ref_tracking.c (suggested by John)
- Call bpf_lookup_user_key() and bpf_key_put() only in sleepable programs
v5:
- Move KEY_LOOKUP_ to include/linux/key.h
for validation of bpf_verify_pkcs7_signature() parameter
- Remove bpf_lookup_user_key() and bpf_key_put() helpers, and the
corresponding tests
- Replace struct key parameter of bpf_verify_pkcs7_signature() with the
keyring serial and lookup flags
- Call lookup_user_key() and key_put() in bpf_verify_pkcs7_signature()
code, to ensure that the retrieved key is used according to the
permission requested at lookup time
- Clarified keyring precedence in the description of
bpf_verify_pkcs7_signature() (suggested by John)
- Remove newline in the second argument of ASSERT_
- Fix helper prototype regular expression in bpf_doc.py
v4:
- Remove bpf_request_key_by_id(), don't return an invalid pointer that
other helpers can use
- Pass the keyring ID (without ULONG_MAX, suggested by Alexei) to
bpf_verify_pkcs7_signature()
- Introduce bpf_lookup_user_key() and bpf_key_put() helpers (suggested by
Alexei)
- Add lookup_key_norelease test, to ensure that the verifier blocks eBPF
programs which don't decrement the key reference count
- Parse raw PKCS#7 signature instead of module-style signature in the
verify_pkcs7_signature test (suggested by Alexei)
- Parse kernel module in user space and pass raw PKCS#7 signature to the
eBPF program for signature verification
v3:
- Rename bpf_verify_signature() back to bpf_verify_pkcs7_signature() to
avoid managing different parameters for each signature verification
function in one helper (suggested by Daniel)
- Use dynamic pointers and export bpf_dynptr_get_size() (suggested by
Alexei)
- Introduce bpf_request_key_by_id() to give more flexibility to the caller
of bpf_verify_pkcs7_signature() to retrieve the appropriate keyring
(suggested by Alexei)
- Fix test by reordering the gcc command line, always compile sign-file
- Improve helper support check mechanism in the test
v2:
- Rename bpf_verify_pkcs7_signature() to a more generic
bpf_verify_signature() and pass the signature type (suggested by KP)
- Move the helper and prototype declaration under #ifdef so that user
space can probe for support for the helper (suggested by Daniel)
- Describe better the keyring types (suggested by Daniel)
- Include linux/bpf.h instead of vmlinux.h to avoid implicit or
redeclaration
- Make the test selfcontained (suggested by Alexei)
v1:
- Don't define new map flag but introduce simple wrapper of
verify_pkcs7_signature() (suggested by Alexei and KP)
====================
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Roberto Sassu [Tue, 20 Sep 2022 07:59:51 +0000 (09:59 +0200)]
selftests/bpf: Add tests for dynamic pointers parameters in kfuncs
Add tests to ensure that only supported dynamic pointer types are accepted,
that the passed argument is actually a dynamic pointer, that the passed
argument is a pointer to the stack, and that bpf_verify_pkcs7_signature()
correctly handles dynamic pointers with data set to NULL.
The tests are currently in the deny list for s390x (JIT does not support
calling kernel function).
Signed-off-by: Roberto Sassu <roberto.sassu@huawei.com>
Acked-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
Link: https://lore.kernel.org/r/20220920075951.929132-14-roberto.sassu@huaweicloud.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Roberto Sassu [Tue, 20 Sep 2022 07:59:50 +0000 (09:59 +0200)]
selftests/bpf: Add test for bpf_verify_pkcs7_signature() kfunc
Perform several tests to ensure the correct implementation of the
bpf_verify_pkcs7_signature() kfunc.
Do the tests with data signed with a generated testing key (by using
sign-file from scripts/) and with the tcp_bic.ko kernel module if it is
found in the system. The test does not fail if tcp_bic.ko is not found.
First, perform an unsuccessful signature verification without data.
Second, perform a successful signature verification with the session
keyring and a new one created for testing.
Then, ensure that permission and validation checks are done properly on the
keyring provided to bpf_verify_pkcs7_signature(), even though those checks were
deferred at the time the keyring was retrieved with bpf_lookup_user_key().
The tests expect to encounter an error if the Search permission is removed
from the keyring, or the keyring is expired.
Finally, perform a successful and unsuccessful signature verification with
the keyrings with pre-determined IDs (the last test fails because the key
is not in the platform keyring).
The test is currently in the deny list for s390x (JIT does not support
calling kernel function).
Signed-off-by: Roberto Sassu <roberto.sassu@huawei.com>
Link: https://lore.kernel.org/r/20220920075951.929132-13-roberto.sassu@huaweicloud.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Roberto Sassu [Tue, 20 Sep 2022 07:59:49 +0000 (09:59 +0200)]
selftests/bpf: Add additional tests for bpf_lookup_*_key()
Add a test to ensure that bpf_lookup_user_key() creates a referenced
special keyring when the KEY_LOOKUP_CREATE flag is passed to this function.
Ensure that the kfunc rejects invalid flags.
Ensure that a keyring can be obtained from bpf_lookup_system_key() when one
of the pre-determined keyring IDs is provided.
The test is currently blacklisted for s390x (JIT does not support calling
kernel function).
Signed-off-by: Roberto Sassu <roberto.sassu@huawei.com>
Link: https://lore.kernel.org/r/20220920075951.929132-12-roberto.sassu@huaweicloud.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Roberto Sassu [Tue, 20 Sep 2022 07:59:48 +0000 (09:59 +0200)]
selftests/bpf: Add verifier tests for bpf_lookup_*_key() and bpf_key_put()
Add verifier tests for bpf_lookup_*_key() and bpf_key_put(), to ensure that
acquired key references stored in the bpf_key structure are released, that
a non-NULL bpf_key pointer is passed to bpf_key_put(), and that key
references are not leaked.
Also, slightly modify test_verifier.c, to find the BTF ID of the attach
point for the LSM program type (currently, it is done only for TRACING).
Signed-off-by: Roberto Sassu <roberto.sassu@huawei.com>
Acked-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
Link: https://lore.kernel.org/r/20220920075951.929132-11-roberto.sassu@huaweicloud.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Roberto Sassu [Tue, 20 Sep 2022 07:59:47 +0000 (09:59 +0200)]
selftests/bpf: Compile kernel with everything as built-in
Since the eBPF CI does not support kernel modules, change the kernel config
to compile everything as built-in.
Signed-off-by: Roberto Sassu <roberto.sassu@huawei.com>
Acked-by: Daniel Müller <deso@posteo.net>
Acked-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
Link: https://lore.kernel.org/r/20220920075951.929132-10-roberto.sassu@huaweicloud.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Roberto Sassu [Tue, 20 Sep 2022 07:59:46 +0000 (09:59 +0200)]
bpf: Add bpf_verify_pkcs7_signature() kfunc
Add the bpf_verify_pkcs7_signature() kfunc, to give eBPF security modules
the ability to check the validity of a signature against supplied data, by
using user-provided or system-provided keys as trust anchor.
The new kfunc makes it possible to enforce mandatory policies, as eBPF
programs might be allowed to make security decisions only based on data
sources the system administrator approves.
The caller should provide the data to be verified and the signature as eBPF
dynamic pointers (to minimize the number of parameters) and a bpf_key
structure containing a reference to the keyring with keys trusted for
signature verification, obtained from bpf_lookup_user_key() or
bpf_lookup_system_key().
For bpf_key structures obtained from the former lookup function,
bpf_verify_pkcs7_signature() completes the permission check deferred by
that function by calling key_validate(). key_task_permission() is already
called by the PKCS#7 code.
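A hedged usage sketch from the eBPF side (kfunc declarations as described in
this series; the attach point, buffer sizes and the locally defined keyring ID
constant are illustrative):
  #include "vmlinux.h"
  #include <errno.h>
  #include <bpf/bpf_helpers.h>
  #include <bpf/bpf_tracing.h>

  extern struct bpf_key *bpf_lookup_system_key(__u64 id) __ksym;
  extern void bpf_key_put(struct bpf_key *key) __ksym;
  extern int bpf_verify_pkcs7_signature(struct bpf_dynptr *data_ptr,
                                        struct bpf_dynptr *sig_ptr,
                                        struct bpf_key *trusted_keyring) __ksym;

  /* pre-determined keyring IDs live in include/linux/verification.h;
   * the value here is an assumption for illustration only
   */
  #define SYSTEM_KEYRING_ID_SECONDARY 1

  #define MAX_DATA_SIZE 4096
  #define MAX_SIG_SIZE  1024

  __u8 data[MAX_DATA_SIZE];
  __u32 data_len;
  __u8 sig[MAX_SIG_SIZE];
  __u32 sig_len;

  char _license[] SEC("license") = "GPL";

  SEC("lsm.s/bpf")  /* sleepable LSM hook, as required by these kfuncs */
  int BPF_PROG(check_bpf, int cmd, union bpf_attr *attr, unsigned int size)
  {
          struct bpf_dynptr data_ptr, sig_ptr;
          struct bpf_key *trusted;
          int ret;

          if (data_len > sizeof(data) || sig_len > sizeof(sig))
                  return 0;

          bpf_dynptr_from_mem(data, data_len, 0, &data_ptr);
          bpf_dynptr_from_mem(sig, sig_len, 0, &sig_ptr);

          trusted = bpf_lookup_system_key(SYSTEM_KEYRING_ID_SECONDARY);
          if (!trusted)
                  return -ENOENT;

          ret = bpf_verify_pkcs7_signature(&data_ptr, &sig_ptr, trusted);

          bpf_key_put(trusted);
          return ret;
  }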
Signed-off-by: Roberto Sassu <roberto.sassu@huawei.com>
Acked-by: KP Singh <kpsingh@kernel.org>
Acked-by: Song Liu <song@kernel.org>
Link: https://lore.kernel.org/r/20220920075951.929132-9-roberto.sassu@huaweicloud.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Roberto Sassu [Tue, 20 Sep 2022 07:59:45 +0000 (09:59 +0200)]
bpf: Add bpf_lookup_*_key() and bpf_key_put() kfuncs
Add the bpf_lookup_user_key(), bpf_lookup_system_key() and bpf_key_put()
kfuncs, to respectively search a key with a given key handle serial number
and flags, obtain a key from a pre-determined ID defined in
include/linux/verification.h, and release the acquired key reference.
Introduce system_keyring_id_check() to validate the keyring ID parameter of
bpf_lookup_system_key().
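A minimal sketch of the expected acquire/release pattern from a BPF program
(not taken from the patch; the section name and the serial coming from a
global variable are illustrative assumptions):
#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

extern struct bpf_key *bpf_lookup_user_key(__u32 serial, __u64 flags) __ksym;
extern void bpf_key_put(struct bpf_key *key) __ksym;

__u32 key_serial;                /* filled in by user space */

SEC("lsm.s/bpf")
int BPF_PROG(lookup_key_example)
{
        struct bpf_key *key;

        /* flags are validated against the KEY_LOOKUP_ values by the kfunc */
        key = bpf_lookup_user_key(key_serial, 0);
        if (!key)
                return 0;

        /* ... use the keyring ... */

        /* every successful lookup must be paired with bpf_key_put() */
        bpf_key_put(key);
        return 0;
}

char _license[] SEC("license") = "GPL";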
Signed-off-by: Roberto Sassu <roberto.sassu@huawei.com>
Acked-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
Acked-by: Song Liu <song@kernel.org>
Link: https://lore.kernel.org/r/20220920075951.929132-8-roberto.sassu@huaweicloud.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Roberto Sassu [Tue, 20 Sep 2022 07:59:44 +0000 (09:59 +0200)]
KEYS: Move KEY_LOOKUP_ to include/linux/key.h and define KEY_LOOKUP_ALL
In preparation for the patch that introduces the bpf_lookup_user_key() eBPF
kfunc, move KEY_LOOKUP_ definitions to include/linux/key.h, to be able to
validate the kfunc parameters. Add them to enum key_lookup_flag, so that
all the current ones and the ones defined in the future are automatically
exported through BTF and available to eBPF programs.
Also, add KEY_LOOKUP_ALL to the enum, with the logical OR of currently
defined flags as value, to facilitate checking whether a variable contains
only those flags.
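The resulting definitions would look roughly like the sketch below; the flag
values match the pre-existing KEY_LOOKUP_ constants, while the helper at the
end is a hypothetical illustration of the kind of check KEY_LOOKUP_ALL
enables, not code from the patch:
enum key_lookup_flag {
        KEY_LOOKUP_CREATE  = 0x01,
        KEY_LOOKUP_PARTIAL = 0x02,
        KEY_LOOKUP_ALL     = (KEY_LOOKUP_CREATE | KEY_LOOKUP_PARTIAL),
};

/* hypothetical caller-side check: reject any unknown flag bits in one test */
static inline bool key_lookup_flags_valid(unsigned long flags)
{
        return (flags & ~KEY_LOOKUP_ALL) == 0;
}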
Signed-off-by: Roberto Sassu <roberto.sassu@huawei.com>
Acked-by: Jarkko Sakkinen <jarkko@kernel.org>
Link: https://lore.kernel.org/r/20220920075951.929132-7-roberto.sassu@huaweicloud.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Roberto Sassu [Tue, 20 Sep 2022 07:59:43 +0000 (09:59 +0200)]
bpf: Export bpf_dynptr_get_size()
Export bpf_dynptr_get_size(), so that kernel code dealing with eBPF dynamic
pointers can obtain the real size of data carried by this data structure.
Signed-off-by: Roberto Sassu <roberto.sassu@huawei.com>
Reviewed-by: Joanne Koong <joannelkoong@gmail.com>
Acked-by: KP Singh <kpsingh@kernel.org>
Acked-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
Link: https://lore.kernel.org/r/20220920075951.929132-6-roberto.sassu@huaweicloud.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Roberto Sassu [Tue, 20 Sep 2022 07:59:42 +0000 (09:59 +0200)]
btf: Allow dynamic pointer parameters in kfuncs
Allow dynamic pointers (struct bpf_dynptr_kern *) to be specified as
parameters in kfuncs. Also, ensure that dynamic pointers passed as argument
are valid and initialized, are a pointer to the stack, and of the type
local. More dynamic pointer types can be supported in the future.
To properly detect whether a parameter is of the desired type, introduce
the stringify_struct() macro to compare the returned structure name with
the desired name. In addition, protect against structure renames, by
halting the build with BUILD_BUG_ON(), so that developers have to revisit
the code.
To check if a dynamic pointer passed to the kfunc is valid and initialized,
and if its type is local, export the existing functions
is_dynptr_reg_valid_init() and is_dynptr_type_expected().
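A sketch of how such a macro can work is shown below; the exact BUILD_BUG_ON()
expression and the surrounding verifier plumbing in the patch may differ, and
the wrapper function name is purely illustrative:
#include <linux/bpf.h>
#include <linux/build_bug.h>
#include <linux/stringify.h>
#include <linux/string.h>

/* __stringify() yields the name used for the BTF string comparison, while
 * the BUILD_BUG_ON() expression stops the build if "struct x" no longer
 * exists (e.g. after a rename), forcing developers to revisit the code.
 */
#define stringify_struct(x)                             \
        ({ BUILD_BUG_ON(sizeof(struct x) < 0);          \
           __stringify(x); })

/* ref_tname is the BTF type name of the kfunc argument being checked */
static bool kfunc_arg_is_dynptr(const char *ref_tname)
{
        return !strcmp(ref_tname, stringify_struct(bpf_dynptr_kern));
}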
Cc: Joanne Koong <joannelkoong@gmail.com>
Cc: Kumar Kartikeya Dwivedi <memxor@gmail.com>
Signed-off-by: Roberto Sassu <roberto.sassu@huawei.com>
Acked-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
Link: https://lore.kernel.org/r/20220920075951.929132-5-roberto.sassu@huaweicloud.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Roberto Sassu [Tue, 20 Sep 2022 07:59:41 +0000 (09:59 +0200)]
bpf: Move dynptr type check to is_dynptr_type_expected()
Move dynptr type check to is_dynptr_type_expected() from
is_dynptr_reg_valid_init(), so that callers can better determine the cause
of a negative result (dynamic pointer not valid/initialized, dynamic
pointer of the wrong type). It will be useful for example for BTF, to
restrict which dynamic pointer types can be passed to kfuncs, as initially
only the local type will be supported.
Also, splitting makes the code more readable, since checking the dynamic
pointer type is not necessarily related to validity and initialization.
Split the validity/initialization and dynamic pointer type check also in
the verifier, and adjust the expected error message in the test (a test for
an unexpected dynptr type passed to a helper cannot be added due to missing
suitable helpers, but this case has been tested manually).
Cc: Joanne Koong <joannelkoong@gmail.com>
Cc: Kumar Kartikeya Dwivedi <memxor@gmail.com>
Signed-off-by: Roberto Sassu <roberto.sassu@huawei.com>
Acked-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
Link: https://lore.kernel.org/r/20220920075951.929132-4-roberto.sassu@huaweicloud.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Roberto Sassu [Tue, 20 Sep 2022 07:59:40 +0000 (09:59 +0200)]
btf: Export bpf_dynptr definition
eBPF dynamic pointers are a new feature recently added upstream. A dynptr binds
together a pointer to a memory area and its size. The internal kernel
structure bpf_dynptr_kern is not accessible to eBPF programs in user space.
They instead see bpf_dynptr, which is then translated to the internal
kernel structure by the eBPF verifier.
The problem is that the uapi header linux/bpf.h and the BTF-generated
vmlinux.h cannot be included at the same time, as both define some of the
same structures/enums, and the compiler complains that they are redefined.
As bpf_dynptr is defined in the uapi header linux/bpf.h, this makes it
impossible to include vmlinux.h. However, in some cases, e.g. when using
kfuncs, vmlinux.h has to be included. The only option until now was to
include vmlinux.h and copy the definition of bpf_dynptr from linux/bpf.h
directly into the eBPF program source code.
Solve the problem by using the same approach as for bpf_timer (which also
follows the same scheme with the _kern suffix for the internal kernel
structure).
Add the following line in one of the dynamic pointer helpers,
bpf_dynptr_from_mem():
BTF_TYPE_EMIT(struct bpf_dynptr);
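BTF_TYPE_EMIT() forces the compiler to keep type information for its argument,
so the uapi struct ends up in vmlinux BTF (and therefore in vmlinux.h) even
though no kernel code otherwise references it by that name. A hedged sketch of
the placement, with the helper body abbreviated rather than reproduced from
the patch:
BPF_CALL_4(bpf_dynptr_from_mem, void *, data, u32, size, u64, flags,
           struct bpf_dynptr_kern *, ptr)
{
        /* emit "struct bpf_dynptr" into vmlinux BTF */
        BTF_TYPE_EMIT(struct bpf_dynptr);

        /* ... existing validation and initialization of ptr omitted ... */
        return 0;
}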
Cc: stable@vger.kernel.org
Cc: Joanne Koong <joannelkoong@gmail.com>
Fixes: 97e03f521050c ("bpf: Add verifier support for dynptrs")
Signed-off-by: Roberto Sassu <roberto.sassu@huawei.com>
Acked-by: Yonghong Song <yhs@fb.com>
Tested-by: KP Singh <kpsingh@kernel.org>
Link: https://lore.kernel.org/r/20220920075951.929132-3-roberto.sassu@huaweicloud.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
KP Singh [Tue, 20 Sep 2022 07:59:39 +0000 (09:59 +0200)]
bpf: Allow kfuncs to be used in LSM programs
In preparation for the addition of new kfuncs, allow kfuncs defined in the
tracing subsystem to be used in LSM programs by mapping the LSM program
type to the TRACING hook.
Signed-off-by: KP Singh <kpsingh@kernel.org>
Signed-off-by: Roberto Sassu <roberto.sassu@huawei.com>
Acked-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
Link: https://lore.kernel.org/r/20220920075951.929132-2-roberto.sassu@huaweicloud.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Tao Chen [Mon, 12 Sep 2022 16:43:00 +0000 (00:43 +0800)]
libbpf: Support raw BTF placed in the default search path
Currently, the default vmlinux files at '/boot/vmlinux-*',
'/lib/modules/*/vmlinux-*' etc. are parsed with 'btf__parse_elf()' to
extract BTF. It is possible that these files are actually raw BTF files
similar to /sys/kernel/btf/vmlinux. So parse these files with
'btf__parse' which tries both raw format and ELF format.
This might be useful in some scenarios where users put their custom BTF
into known locations and don't want to specify btf_custom_path option.
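For reference, the behavior this enables matches what a caller of the libbpf
API gets directly from btf__parse(); a minimal sketch (the vmlinux path is an
illustrative assumption):
#include <errno.h>
#include <stdio.h>
#include <bpf/btf.h>

int main(void)
{
        /* works whether the file is raw BTF (like /sys/kernel/btf/vmlinux)
         * or an ELF image carrying a .BTF section */
        struct btf *btf = btf__parse("/boot/vmlinux-custom", NULL);

        if (!btf) {
                fprintf(stderr, "failed to parse BTF: %d\n", -errno);
                return 1;
        }
        printf("loaded %u BTF types\n", btf__type_cnt(btf));
        btf__free(btf);
        return 0;
}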
Signed-off-by: Tao Chen <chentao.kernel@linux.alibaba.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: Yonghong Song <yhs@fb.com>
Link: https://lore.kernel.org/bpf/3f59fb5a345d2e4f10e16fe9e35fbc4c03ecaa3e.1662999860.git.chentao.kernel@linux.alibaba.com
Yauheni Kaliuta [Thu, 8 Sep 2022 12:01:46 +0000 (15:01 +0300)]
selftests: bpf: test_kmod.sh: Pass parameters to the module
It's possible to specify particular tests for test_bpf.ko with
module parameters. Make it possible to pass the module parameters,
example:
test_kmod.sh test_range=1,3
Since the magnitude tests take a long time, it can be reasonable to skip
them.
Signed-off-by: Yauheni Kaliuta <ykaliuta@redhat.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20220908120146.381218-1-ykaliuta@redhat.com
Yonghong Song [Sat, 10 Sep 2022 02:52:14 +0000 (19:52 -0700)]
libbpf: Improve BPF_PROG2 macro code quality and description
Commit 34586d29f8df ("libbpf: Add new BPF_PROG2 macro") added BPF_PROG2
macro for trampoline based programs with struct arguments. Andrii
made a few suggestions to improve code quality and description.
This patch implements these suggestions, including a better internal
macro name, a consistent usage pattern for __builtin_choose_expr(),
a simpler macro definition for always-inline function arguments and
a better macro description.
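Usage keeps the type/name pair form of the original macro; a hedged sketch
follows (the traced function and struct here are illustrative assumptions,
not taken from this patch):
#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

struct bpf_testmod_struct_arg_2 {
        long a;
        long b;
};

SEC("fentry/bpf_testmod_test_struct_arg_2")
int BPF_PROG2(on_struct_arg, struct bpf_testmod_struct_arg_2, a, int, b, int, c)
{
        /* struct-by-value argument 'a' is usable directly */
        bpf_printk("a.a=%ld b=%d c=%d", a.a, b, c);
        return 0;
}

char _license[] SEC("license") = "GPL";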
Signed-off-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: Stanislav Fomichev <sdf@google.com>
Link: https://lore.kernel.org/bpf/20220910025214.1536510-1-yhs@fb.com
Andrii Nakryiko [Wed, 21 Sep 2022 23:24:18 +0000 (16:24 -0700)]
Merge branch 'bpf: Add user-space-publisher ring buffer map type'
David Vernet says:
====================
This patch set defines a new map type, BPF_MAP_TYPE_USER_RINGBUF, which
provides single-user-space-producer / single-kernel-consumer semantics over
a ring buffer. Along with the new map type, a helper function called
bpf_user_ringbuf_drain() is added which allows a BPF program to specify a
callback with the following signature, to which samples are posted by the
helper:
void (struct bpf_dynptr *dynptr, void *context);
The program can then use the bpf_dynptr_read() or bpf_dynptr_data() helper
functions to safely read the sample from the dynptr. There are currently no
helpers available to determine the size of the sample, but one could easily
be added if required.
On the user-space side, libbpf has been updated to export a new
'struct ring_buffer_user' type, along with the following symbols:
struct ring_buffer_user *
ring_buffer_user__new(int map_fd,
const struct ring_buffer_user_opts *opts);
void ring_buffer_user__free(struct ring_buffer_user *rb);
void *ring_buffer_user__reserve(struct ring_buffer_user *rb,
uint32_t size);
void *ring_buffer_user__poll(struct ring_buffer_user *rb, uint32_t size,
int timeout_ms);
void ring_buffer_user__discard(struct ring_buffer_user *rb, void *sample);
void ring_buffer_user__submit(struct ring_buffer_user *rb, void *sample);
These symbols are exported for inclusion in libbpf version 1.0.0.
Signed-off-by: David Vernet <void@manifault.com>
---
v5 -> v6:
- Fixed s/BPF_MAP_TYPE_RINGBUF/BPF_MAP_TYPE_USER_RINGBUF typo in the
libbpf user ringbuf doxygen header comment for ring_buffer_user__new()
(Andrii).
- Specify that pointer returned from ring_buffer_user__reserve() and its
blocking counterpart is 8-byte aligned (Andrii).
- Renamed user_ringbuf__commit() to user_ringbuf_commit(), as it's static
(Andrii).
- Another slight reworking of user_ring_buffer__reserve_blocking() to
remove some extraneous nanosecond variables + checking (Andrii).
- Add a final check of user_ring_buffer__reserve() in
user_ring_buffer__reserve_blocking().
- Moved busy bit lock / unlock logic from __bpf_user_ringbuf_peek() to
bpf_user_ringbuf_drain() (Andrii).
- -ENOSPC -> -ENODATA for an empty ring buffer in
__bpf_user_ringbuf_peek() (Andrii).
- Updated BPF_RB_FORCE_WAKEUP to force a wakeup notification to be
sent even if no sample was drained.
- Changed a bit of the wording in the UAPI header for
bpf_user_ringbuf_drain() to mention the BPF_RB_FORCE_WAKEUP behavior.
- Remove extra space after return in ringbuf_map_poll_user() (Andrii).
- Removed now-extraneous paragraph from the commit summary of patch 2/4
(Andrii).
v4 -> v5:
- DENYLISTed the user-ringbuf test suite on s390x. We have a number of
functions in the progs/user_ringbuf_success.c prog that user-space
fires by invoking a syscall. Not all of these syscalls are available
on s390x. If and when we add the ability to kick the kernel from
user-space, or if we end up using iterators for that per Hao's
suggestion, we could re-enable this test suite on s390x.
- Fixed a few more places that needed ringbuffer -> ring buffer.
v3 -> v4:
- Update BPF_MAX_USER_RINGBUF_SAMPLES to not specify a bit, and instead
just specify a number of samples. (Andrii)
- Update "ringbuffer" in comments and commit summaries to say "ring
buffer". (Andrii)
- Return -E2BIG from bpf_user_ringbuf_drain() both when a sample can't
fit into the ring buffer, and when it can't fit into a dynptr. (Andrii)
- Don't loop over samples in __bpf_user_ringbuf_peek() if a sample was
discarded. Instead, return -EAGAIN so the caller can deal with it. Also
updated the caller to detect -EAGAIN and skip over it when iterating.
(Andrii)
- Removed the heuristic for notifying user-space when a sample is drained,
causing the ring buffer to no longer be full. This may be useful in the
future, but is being removed now because it's strictly a heuristic.
- Re-add BPF_RB_FORCE_WAKEUP flag to bpf_user_ringbuf_drain(). (Andrii)
- Remove helper_allocated_dynptr tracker from verifier. (Andrii)
- Add libbpf function header comments to tools/lib/bpf/libbpf.h, so that
they will be included in rendered libbpf docs. (Andrii)
- Add symbols to a new LIBBPF_1.1.0 section in linker version script,
rather than including them in LIBBPF_1.0.0. (Andrii)
- Remove libbpf_err() calls from static libbpf functions. (Andrii)
- Check user_ring_buffer_opts instead of ring_buffer_opts in
user_ring_buffer__new(). (Andrii)
- Avoid an extra if in the hot path in user_ringbuf__commit(). (Andrii)
- Use ENOSPC rather than ENODATA if no space is available in the ring
buffer. (Andrii)
- Don't round sample size in header to 8, but still round size that is
reserved and written to 8, and validate positions are multiples of 8
(Andrii).
- Use nanoseconds for most calculations in
user_ring_buffer__reserve_blocking(). (Andrii)
- Don't use CHECK() in testcases, instead use ASSERT_*. (Andrii)
- Use SEC("?raw_tp") instead of SEC("?raw_tp/sys_nanosleep") in negative
test. (Andrii)
- Move test_user_ringbuf.h header to live next to BPF program instead of
a directory up from both it and the user-space test program. (Andrii)
- Update bpftool help message / docs to also include user_ringbuf.
v2 -> v3:
- Lots of formatting fixes, such as keeping things on one line if they fit
within 100 characters, and removing some extraneous newlines. Applies
to all diffs in the patch-set. (Andrii)
- Renamed ring_buffer_user__* symbols to user_ring_buffer__*. (Andrii)
- Added a missing smp_mb__before_atomic() in
__bpf_user_ringbuf_sample_release(). (Hao)
- Restructure how and when notification events are sent from the kernel to
the user-space producers via the .map_poll() callback for the
BPF_MAP_TYPE_USER_RINGBUF map. Before, we only sent a notification when
the ringbuffer was fully drained. Now, we guarantee user-space that
we'll send an event at least once per bpf_user_ringbuf_drain(), as long
as at least one sample was drained, and BPF_RB_NO_WAKEUP was not passed.
As a heuristic, we also send a notification event any time a sample being
drained causes the ringbuffer to no longer be full. (Andrii)
- Continuing on the above point, updated
user_ring_buffer__reserve_blocking() to loop around epoll_wait() until a
sufficiently large sample is found. (Andrii)
- Communicate BPF_RINGBUF_BUSY_BIT and BPF_RINGBUF_DISCARD_BIT in sample
headers. The ringbuffer implementation still only supports
single-producer semantics, but we can now add synchronization support in
user_ring_buffer__reserve(), and will automatically get multi-producer
semantics. (Andrii)
- Updated some commit summaries, specifically adding more details where
warranted. (Andrii)
- Improved function documentation for bpf_user_ringbuf_drain(), more
clearly explaining all function arguments and return types, as well as
the semantics for waking up user-space producers.
- Add function header comments for user_ring_buffer__reserve{_blocking}().
(Andrii)
- Rounding-up all samples to 8-bytes in the user-space producer, and
enforcing that all samples are properly aligned in the kernel. (Andrii)
- Added testcases that verify that bpf_user_ringbuf_drain() properly
validates samples, and returns error conditions if any invalid samples
are encountered. (Andrii)
- Move atomic_t busy field out of the consumer page, and into the
struct bpf_ringbuf. (Andrii)
- Split ringbuf_map_{mmap, poll}_{kern, user}() into separate
implementations. (Andrii)
- Don't silently consume errors in bpf_user_ringbuf_drain(). (Andrii)
- Remove magic number of samples (4096) from bpf_user_ringbuf_drain(),
and instead use BPF_MAX_USER_RINGBUF_SAMPLES macro, which allows
128k samples. (Andrii)
- Remove MEM_ALLOC modifier from PTR_TO_DYNPTR register in verifier, and
instead rely solely on the register being PTR_TO_DYNPTR. (Andrii)
- Move freeing of atomic_t busy bit to before we invoke irq_work_queue() in
__bpf_user_ringbuf_sample_release(). (Andrii)
- Only check for BPF_RB_NO_WAKEUP flag in bpf_ringbuf_drain().
- Remove libbpf function names from kernel smp_{load, store}* comments in
the kernel. (Andrii)
- Don't use double-underscore naming convention in libbpf functions.
(Andrii)
- Use proper __u32 and __u64 for types where we need to guarantee their
size. (Andrii)
v1 -> v2:
- Following Joanne landing 883743422ced ("bpf: Fix ref_obj_id for dynptr
data slices in verifier") [0], removed [PATCH 1/5] bpf: Clear callee
saved regs after updating REG0 [1]. (Joanne)
- Following the above adjustment, updated check_helper_call() to not store
a reference for bpf_dynptr_data() if the register containing the dynptr
is of type MEM_ALLOC. (Joanne)
- Fixed casting issue pointed out by kernel test robot by adding a missing
(uintptr_t) cast. (lkp)
[0] https://lore.kernel.org/all/20220809214055.4050604-1-joannelkoong@gmail.com/
[1] https://lore.kernel.org/all/20220808155341.2479054-1-void@manifault.com/
====================
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
David Vernet [Tue, 20 Sep 2022 00:01:00 +0000 (19:01 -0500)]
selftests/bpf: Add selftests validating the user ringbuf
This change includes selftests that validate the expected behavior and
APIs of the new BPF_MAP_TYPE_USER_RINGBUF map type.
Signed-off-by: David Vernet <void@manifault.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20220920000100.477320-5-void@manifault.com
David Vernet [Tue, 20 Sep 2022 00:00:59 +0000 (19:00 -0500)]
bpf: Add libbpf logic for user-space ring buffer
Now that all of the logic is in place in the kernel to support user-space
produced ring buffers, we can add the user-space logic to libbpf. This
patch therefore adds the following public symbols to libbpf:
struct user_ring_buffer *
user_ring_buffer__new(int map_fd,
const struct user_ring_buffer_opts *opts);
void *user_ring_buffer__reserve(struct user_ring_buffer *rb, __u32 size);
void *user_ring_buffer__reserve_blocking(struct user_ring_buffer *rb,
__u32 size, int timeout_ms);
void user_ring_buffer__submit(struct user_ring_buffer *rb, void *sample);
void user_ring_buffer__discard(struct user_ring_buffer *rb, void *sample);
void user_ring_buffer__free(struct user_ring_buffer *rb);
A user-space producer must first create a struct user_ring_buffer * object
with user_ring_buffer__new(), and can then reserve samples in the
ring buffer using one of the following two symbols:
void *user_ring_buffer__reserve(struct user_ring_buffer *rb, __u32 size);
void *user_ring_buffer__reserve_blocking(struct user_ring_buffer *rb,
__u32 size, int timeout_ms);
With user_ring_buffer__reserve(), a pointer to a 'size' region of the ring
buffer will be returned if sufficient space is available in the buffer.
user_ring_buffer__reserve_blocking() provides similar semantics, but will
block for up to 'timeout_ms' in epoll_wait() if there is insufficient space
in the buffer. The kernel guarantees that this function will receive at
least one event notification per invocation of bpf_user_ringbuf_drain(),
provided that at least one sample is drained and the BPF program did not
pass the BPF_RB_NO_WAKEUP flag to bpf_user_ringbuf_drain().
Once a sample is reserved, it must either be committed to the ring buffer
with user_ring_buffer__submit(), or discarded with
user_ring_buffer__discard().
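A hedged sketch of the producer flow on top of these symbols (the sample
layout and how map_fd is obtained are illustrative assumptions):
#include <errno.h>
#include <string.h>
#include <bpf/libbpf.h>

struct my_sample {               /* hypothetical sample layout */
        int pid;
        char comm[16];
};

static int publish_one(int map_fd)
{
        struct user_ring_buffer *rb;
        struct my_sample *s;
        int err = 0;

        rb = user_ring_buffer__new(map_fd, NULL);
        if (!rb)
                return -errno;

        /* wait for up to 100ms if the ring buffer is currently full */
        s = user_ring_buffer__reserve_blocking(rb, sizeof(*s), 100);
        if (!s) {
                err = -errno;
                goto out;
        }

        s->pid = 1234;
        strcpy(s->comm, "producer");

        /* makes the sample visible to bpf_user_ringbuf_drain(); call
         * user_ring_buffer__discard() instead to drop a reserved sample */
        user_ring_buffer__submit(rb, s);
out:
        user_ring_buffer__free(rb);
        return err;
}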
Signed-off-by: David Vernet <void@manifault.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20220920000100.477320-4-void@manifault.com
David Vernet [Tue, 20 Sep 2022 00:00:58 +0000 (19:00 -0500)]
bpf: Add bpf_user_ringbuf_drain() helper
In a prior change, we added a new BPF_MAP_TYPE_USER_RINGBUF map type which
will allow user-space applications to publish messages to a ring buffer
that is consumed by a BPF program in kernel-space. In order for this
map-type to be useful, it will require a BPF helper function that BPF
programs can invoke to drain samples from the ring buffer, and invoke
callbacks on those samples. This change adds that capability via a new BPF
helper function:
bpf_user_ringbuf_drain(struct bpf_map *map, void *callback_fn, void *ctx,
u64 flags)
BPF programs may invoke this function to run callback_fn() on a series of
samples in the ring buffer. callback_fn() has the following signature:
long callback_fn(struct bpf_dynptr *dynptr, void *context);
Samples are provided to the callback in the form of struct bpf_dynptr *'s,
which the program can read using BPF helper functions for querying
struct bpf_dynptr's.
In order to support bpf_user_ringbuf_drain(), a new PTR_TO_DYNPTR register
type is added to the verifier to reflect a dynptr that was allocated by
a helper function and passed to a BPF program. Unlike PTR_TO_STACK
dynptrs which are allocated on the stack by a BPF program, PTR_TO_DYNPTR
dynptrs need not use reference tracking, as the BPF helper is trusted to
properly free the dynptr before returning. The verifier currently only
supports PTR_TO_DYNPTR registers that are also DYNPTR_TYPE_LOCAL.
Note that while the corresponding user-space libbpf logic will be added
in a subsequent patch, this patch does contain an implementation of the
.map_poll() callback for BPF_MAP_TYPE_USER_RINGBUF maps. This
.map_poll() callback guarantees that an epoll-waiting user-space
producer will receive at least one event notification whenever at least
one sample is drained in an invocation of bpf_user_ringbuf_drain(),
provided that the function is not invoked with the BPF_RB_NO_WAKEUP
flag. If the BPF_RB_FORCE_WAKEUP flag is provided, a wakeup
notification is sent even if no sample was drained.
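A hedged sketch of the kernel-consumer side (map name, attach point and
sample layout are illustrative assumptions; the attach point shown is the
x86_64 syscall wrapper):
#include "vmlinux.h"
#include <bpf/bpf_helpers.h>

struct {
        __uint(type, BPF_MAP_TYPE_USER_RINGBUF);
        __uint(max_entries, 256 * 1024);
} user_ringbuf SEC(".maps");

struct my_sample {
        int pid;
        char comm[16];
};

static long handle_sample(struct bpf_dynptr *dynptr, void *ctx)
{
        struct my_sample sample;

        /* copy the posted sample out of the dynptr before using it */
        if (bpf_dynptr_read(&sample, sizeof(sample), dynptr, 0, 0))
                return 0;       /* skip malformed sample, keep draining */

        bpf_printk("pid=%d comm=%s", sample.pid, sample.comm);
        return 0;               /* non-zero stops the iteration */
}

SEC("fentry/__x64_sys_getpgid")
int drain_samples(void *ctx)
{
        long num = bpf_user_ringbuf_drain(&user_ringbuf, handle_sample, NULL, 0);

        bpf_printk("drained %ld samples", num);
        return 0;
}

char _license[] SEC("license") = "GPL";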
Signed-off-by: David Vernet <void@manifault.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20220920000100.477320-3-void@manifault.com
David Vernet [Tue, 20 Sep 2022 00:00:57 +0000 (19:00 -0500)]
bpf: Define new BPF_MAP_TYPE_USER_RINGBUF map type
We want to support a ringbuf map type where samples are published from
user-space, to be consumed by BPF programs. BPF currently supports a
kernel -> user-space circular ring buffer via the BPF_MAP_TYPE_RINGBUF
map type. We'll need to define a new map type for user-space -> kernel,
as none of the helpers exported for BPF_MAP_TYPE_RINGBUF will apply
to a user-space producer ring buffer, and we'll want to add one or
more helper functions that would not apply for a kernel-producer
ring buffer.
This patch therefore adds a new BPF_MAP_TYPE_USER_RINGBUF map type
definition. The map type is useless in its current form, as there is no
way to access or use it for anything until we add one or more BPF helpers. A
follow-on patch will therefore add a new helper function that allows BPF
programs to run callbacks on samples that are published to the ring
buffer.
Signed-off-by: David Vernet <void@manifault.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20220920000100.477320-2-void@manifault.com
William Dean [Sat, 17 Sep 2022 08:42:48 +0000 (16:42 +0800)]
bpf: simplify code in btf_parse_hdr
It can directly return 'btf_check_sec_info()' to simplify the code.
Signed-off-by: William Dean <williamsukatube@163.com>
Acked-by: Yonghong Song <yhs@fb.com>
Link: https://lore.kernel.org/r/20220917084248.3649-1-williamsukatube@163.com
Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
Xin Liu [Sat, 17 Sep 2022 08:48:09 +0000 (16:48 +0800)]
libbpf: Fix NULL pointer exception in API btf_dump__dump_type_data
We found that the function btf_dump__dump_type_data() can be called by the
user as an API, but in this function the `opts` parameter may be a null
pointer. This causes the `opts->indent_str` access to trigger a NULL
pointer dereference.
Fixes: 2ce8450ef5a3 ("libbpf: add bpf_object__open_{file, mem} w/ extensible opts")
Signed-off-by: Xin Liu <liuxin350@huawei.com>
Signed-off-by: Weibin Kong <kongweibin2@huawei.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20220917084809.30770-1-liuxin350@huawei.com
Rong Tao [Sun, 11 Sep 2022 13:03:30 +0000 (21:03 +0800)]
samples/bpf: Replace blk_account_io_done() with __blk_account_io_done()
Since commit be6bfe36db17 ("block: inline hot paths of blk_account_io_*()"),
blk_account_io_*() became inline functions.
Signed-off-by: Rong Tao <rtoax@foxmail.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: Yonghong Song <yhs@fb.com>
Link: https://lore.kernel.org/bpf/tencent_1CC476835C219FACD84B6715F0D785517E07@qq.com
Martin KaFai Lau [Tue, 20 Sep 2022 21:30:35 +0000 (14:30 -0700)]
Merge branch 'bpf: Small nf_conn cleanups'
Daniel Xu says:
====================
This patchset cleans up a few small things:
* Delete unused stub
* Rename variable to be more descriptive
* Fix some `extern` declaration warnings
Past discussion:
- v2: https://lore.kernel.org/bpf/cover.1663616584.git.dxu@dxuuu.xyz/
Changes since v2:
- Remove unused #include's
- Move #include <linux/filter.h> to .c
====================
Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
Daniel Xu [Tue, 20 Sep 2022 14:15:24 +0000 (08:15 -0600)]
bpf: Move nf_conn extern declarations to filter.h
We're seeing the following new warnings on netdev/build_32bit and
netdev/build_allmodconfig_warn CI jobs:
../net/core/filter.c:8608:1: warning: symbol
'nf_conn_btf_access_lock' was not declared. Should it be static?
../net/core/filter.c:8611:5: warning: symbol 'nfct_bsa' was not
declared. Should it be static?
Fix by ensuring extern declaration is present while compiling filter.o.
Fixes: 864b656f82cc ("bpf: Add support for writing to nf_conn:mark")
Signed-off-by: Daniel Xu <dxu@dxuuu.xyz>
Link: https://lore.kernel.org/r/2bd2e0283df36d8a4119605878edb1838d144174.1663683114.git.dxu@dxuuu.xyz
Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
Daniel Xu [Tue, 20 Sep 2022 14:15:23 +0000 (08:15 -0600)]
bpf: Rename nfct_bsa to nfct_btf_struct_access
The former name was a little hard to guess.
Signed-off-by: Daniel Xu <dxu@dxuuu.xyz>
Link: https://lore.kernel.org/r/73adc72385c8b162391fbfb404f0b6d4c5cc55d7.1663683114.git.dxu@dxuuu.xyz
Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
Daniel Xu [Tue, 20 Sep 2022 14:15:22 +0000 (08:15 -0600)]
bpf: Remove unused btf_struct_access stub
This stub was not being used anywhere.
Signed-off-by: Daniel Xu <dxu@dxuuu.xyz>
Link: https://lore.kernel.org/r/590e7bd6172ffe0f3d7b51cd40e8ded941aaf7e8.1663683114.git.dxu@dxuuu.xyz
Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
Hou Tao [Mon, 19 Sep 2022 14:48:11 +0000 (22:48 +0800)]
bpf: Check whether or not node is NULL before free it in free_bulk
llnode could be NULL if there are new allocations after the check of
c->free_cnt > c->high_watermark in bpf_mem_refill() and before the call to
__llist_del_first() in free_bulk() (e.g. on a PREEMPT_RT kernel, or with
allocations in NMI context). This will incur an oops as shown below:
BUG: kernel NULL pointer dereference, address: 0000000000000000
#PF: supervisor write access in kernel mode
#PF: error_code(0x0002) - not-present page
PGD 0 P4D 0
Oops: 0002 [#1] PREEMPT_RT SMP
CPU: 39 PID: 373 Comm: irq_work/39 Tainted: G W 6.0.0-rc6-rt9+ #1
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996)
RIP: 0010:bpf_mem_refill+0x66/0x130
......
Call Trace:
<TASK>
irq_work_single+0x24/0x60
irq_work_run_list+0x24/0x30
run_irq_workd+0x18/0x20
smpboot_thread_fn+0x13f/0x2c0
kthread+0x121/0x140
? kthread_complete_and_exit+0x20/0x20
ret_from_fork+0x1f/0x30
</TASK>
Fix it by checking whether or not llnode is NULL in free_bulk().
Fixes: 8d5a8011b35d ("bpf: Batch call_rcu callbacks instead of SLAB_TYPESAFE_BY_RCU.")
Signed-off-by: Hou Tao <houtao1@huawei.com>
Link: https://lore.kernel.org/r/20220919144811.3570825-1-houtao@huaweicloud.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Hou Tao [Mon, 19 Sep 2022 03:57:14 +0000 (11:57 +0800)]
selftests/bpf: Add test result messages for test_task_storage_map_stress_lookup
Add a test result message when test_task_storage_map_stress_lookup()
succeeds or is skipped. The test case can be skipped depending on the
preemption model chosen in the kernel config, so export the skips counter
in test_maps.c and increase it when needed.
The following is the output of test_maps when the test case succeeds or
is skipped:
test_task_storage_map_stress_lookup:PASS
test_maps: OK, 0 SKIPPED
test_task_storage_map_stress_lookup SKIP (no CONFIG_PREEMPT)
test_maps: OK, 1 SKIPPED
Fixes: 73b97bc78b32 ("selftests/bpf: Test concurrent updates on bpf_task_storage_busy")
Signed-off-by: Hou Tao <houtao1@huawei.com>
Link: https://lore.kernel.org/r/20220919035714.2195144-1-houtao@huaweicloud.com
Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
Peilin Ye [Fri, 16 Sep 2022 20:28:00 +0000 (13:28 -0700)]
bpf/btf: Use btf_type_str() whenever possible
We have btf_type_str(). Use it whenever possible in btf.c, instead of
"btf_kind_str[BTF_INFO_KIND(t->info)]".
Signed-off-by: Peilin Ye <peilin.ye@bytedance.com>
Link: https://lore.kernel.org/r/20220916202800.31421-1-yepeilin.cs@gmail.com
Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
Xin Liu [Tue, 13 Sep 2022 07:36:43 +0000 (15:36 +0800)]
libbpf: Clean up legacy bpf maps declaration in bpf_helpers
Legacy BPF map declarations are no longer supported in libbpf v1.0 [0].
Only BTF-defined maps are supported starting from v1.0, so it is time to
remove the definition of bpf_map_def in bpf_helpers.h.
[0] https://github.com/libbpf/libbpf/wiki/Libbpf:-the-road-to-v1.0
Signed-off-by: Xin Liu <liuxin350@huawei.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Song Liu <song@kernel.org>
Link: https://lore.kernel.org/bpf/20220913073643.19960-1-liuxin350@huawei.com
Andrii Nakryiko [Fri, 9 Sep 2022 19:30:53 +0000 (12:30 -0700)]
selftests/bpf: Add veristat tool for mass-verifying BPF object files
Add a small tool, veristat, that allows mass-verification of
a set of *libbpf-compatible* BPF ELF object files. For each such object
file, veristat will attempt to verify each BPF program *individually*.
Regardless of success or failure, it parses BPF verifier stats and
outputs them in human-readable table format. In the future we can also
add CSV and JSON output for more scriptable post-processing, if necessary.
veristat allows specifying a set of stats that should be output and the
ordering across multiple objects and files (e.g., so that one can
easily order by total instructions processed, instead of the default file
name, prog name, verdict, total instructions order).
This tool should be useful for validating various BPF verifier changes
or even validating different kernel versions for regressions.
Here's an example for some of the heaviest selftests/bpf BPF object
files:
$ sudo ./veristat -s insns,file,prog {pyperf,loop,test_verif_scale,strobemeta,test_cls_redirect,profiler}*.linked3.o
File Program Verdict Duration, us Total insns Total states Peak states
------------------------------------ ------------------------------------ ------- ------------ ----------- ------------ -----------
loop3.linked3.o while_true failure 350990 1000001 9663 9663
test_verif_scale3.linked3.o balancer_ingress success 115244 845499 8636 2141
test_verif_scale2.linked3.o balancer_ingress success 77688 773445 3048 788
pyperf600.linked3.o on_event success 2079872 624585 30335 30241
pyperf600_nounroll.linked3.o on_event success 353972 568128 37101 2115
strobemeta.linked3.o on_event success 455230 557149 15915 13537
test_verif_scale1.linked3.o balancer_ingress success 89880 554754 8636 2141
strobemeta_nounroll2.linked3.o on_event success 433906 501725 17087 1912
loop6.linked3.o trace_virtqueue_add_sgs success 282205 398057 8717 919
loop1.linked3.o nested_loops success 125630 361349 5504 5504
pyperf180.linked3.o on_event success 2511740 160398 11470 11446
pyperf100.linked3.o on_event success 744329 87681 6213 6191
test_cls_redirect.linked3.o cls_redirect success 54087 78925 4782 903
strobemeta_subprogs.linked3.o on_event success 57898 65420 1954 403
test_cls_redirect_subprogs.linked3.o cls_redirect success 54522 64965 4619 958
strobemeta_nounroll1.linked3.o on_event success 43313 57240 1757 382
pyperf50.linked3.o on_event success 194355 46378 3263 3241
profiler2.linked3.o tracepoint__syscalls__sys_enter_kill success 23869 43372 1423 542
pyperf_subprogs.linked3.o on_event success 29179 36358 2499 2499
profiler1.linked3.o tracepoint__syscalls__sys_enter_kill success 13052 27036 1946 936
profiler3.linked3.o tracepoint__syscalls__sys_enter_kill success 21023 26016 2186 915
profiler2.linked3.o kprobe__vfs_link success 5255 13896 303 271
profiler1.linked3.o kprobe__vfs_link success 7792 12687 1042 1041
profiler3.linked3.o kprobe__vfs_link success 7332 10601 865 865
profiler2.linked3.o kprobe_ret__do_filp_open success 3417 8900 216 199
profiler2.linked3.o kprobe__vfs_symlink success 3548 8775 203 186
pyperf_global.linked3.o on_event success 10007 7563 520 520
profiler3.linked3.o kprobe_ret__do_filp_open success 4708 6464 532 532
profiler1.linked3.o kprobe_ret__do_filp_open success 3090 6445 508 508
profiler3.linked3.o kprobe__vfs_symlink success 4477 6358 521 521
profiler1.linked3.o kprobe__vfs_symlink success 3381 6347 507 507
profiler2.linked3.o raw_tracepoint__sched_process_exec success 2464 5874 292 189
profiler3.linked3.o raw_tracepoint__sched_process_exec success 2677 4363 397 283
profiler2.linked3.o kprobe__proc_sys_write success 1800 4355 143 138
profiler1.linked3.o raw_tracepoint__sched_process_exec success 1649 4019 333 240
pyperf600_bpf_loop.linked3.o on_event success 2711 3966 306 306
profiler2.linked3.o raw_tracepoint__sched_process_exit success 1234 3138 83 66
profiler3.linked3.o kprobe__proc_sys_write success 1755 2623 223 223
profiler1.linked3.o kprobe__proc_sys_write success 1222 2456 193 193
loop2.linked3.o while_true success 608 1783 57 30
profiler3.linked3.o raw_tracepoint__sched_process_exit success 789 1680 146 146
profiler1.linked3.o raw_tracepoint__sched_process_exit success 592 1526 133 133
strobemeta_bpf_loop.linked3.o on_event success 1015 1512 106 106
loop4.linked3.o combinations success 165 524 18 17
profiler3.linked3.o raw_tracepoint__sched_process_fork success 196 299 25 25
profiler1.linked3.o raw_tracepoint__sched_process_fork success 109 265 19 19
profiler2.linked3.o raw_tracepoint__sched_process_fork success 111 265 19 19
loop5.linked3.o while_true success 47 84 9 9
------------------------------------ ------------------------------------ ------- ------------ ----------- ------------ -----------
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20220909193053.577111-4-andrii@kernel.org
Andrii Nakryiko [Fri, 9 Sep 2022 19:30:52 +0000 (12:30 -0700)]
libbpf: Fix crash if SEC("freplace") programs don't have attach_prog_fd set
Fix SIGSEGV caused by libbpf trying to find attach type in vmlinux BTF
for freplace programs. It's wrong to search in vmlinux BTF and libbpf
doesn't even mark vmlinux BTF as required for freplace programs. So
trying to search anything in obj->vmlinux_btf might cause NULL
dereference if nothing else in BPF object requires vmlinux BTF.
Instead, error out if an freplace (EXT) program doesn't specify
attach_prog_fd at load time.
Fixes:
91abb4a6d79d ("libbpf: Support attachment of BPF tracing programs to kernel modules")
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20220909193053.577111-3-andrii@kernel.org
Andrii Nakryiko [Fri, 9 Sep 2022 19:30:51 +0000 (12:30 -0700)]
selftests/bpf: Fix test_verif_scale{1,3} SEC() annotations
Use the proper SEC("tc") for the test_verif_scale{1,3} programs. It's not
a problem for selftests right now because we set the program type
programmatically, but not having correct SEC() definitions makes it
harder to generically load BPF object files.
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20220909193053.577111-2-andrii@kernel.org
Jiri Olsa [Sat, 3 Sep 2022 13:11:54 +0000 (15:11 +0200)]
bpf: Move bpf_dispatcher function out of ftrace locations
The dispatcher function is attached to/detached from the trampoline by the
dispatcher update function. At the same time it is available as an
ftrace-attachable function.
After discussion [1] the proposed solution is to use compiler
attributes to alter bpf_dispatcher_##name##_func function:
- remove it from being instrumented with the __no_instrument_function__
attribute, so ftrace does not track it
- but still generate 5 nop instructions with patchable_function_entry(5)
attribute, which are expected by bpf_arch_text_poke used by
dispatcher update function
Enabling HAVE_DYNAMIC_FTRACE_NO_PATCHABLE option for x86, so
__patchable_function_entries functions are not part of ftrace/mcount
locations.
Add the attributes to the bpf_dispatcher_XXX function on x86_64 so it is
kept out of ftrace locations and has the 5-byte nop area generated at entry.
These attributes need to be arch specific, as pointed out by Ilya
Leoshkevich here [2].
The dispatcher image is generated only for x86_64 arch, so the
code can stay as is for other archs.
[1] https://lore.kernel.org/bpf/20220722110811.124515-1-jolsa@kernel.org/
[2] https://lore.kernel.org/bpf/969a14281a7791c334d476825863ee449964dd0c.camel@linux.ibm.com/
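A hedged sketch of the attribute combination on x86_64 (macro and function
names here are illustrative; the real change lives in the
DEFINE_BPF_DISPATCHER() machinery in include/linux/bpf.h):
#include <linux/bpf.h>

#ifdef CONFIG_X86_64
/* keep the function out of ftrace/mcount, but still emit 5 nops at entry
 * for bpf_arch_text_poke() used by the dispatcher update function */
#define BPF_DISPATCHER_ATTRIBUTES                               \
        __attribute__((__no_instrument_function__))             \
        __attribute__((patchable_function_entry(5)))
#else
#define BPF_DISPATCHER_ATTRIBUTES
#endif

BPF_DISPATCHER_ATTRIBUTES
unsigned int bpf_dispatcher_example_func(const void *ctx,
                                         const struct bpf_insn *insnsi,
                                         unsigned int (*bpf_func)(const void *,
                                                                  const struct bpf_insn *))
{
        return bpf_func(ctx, insnsi);
}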
Suggested-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lore.kernel.org/bpf/20220903131154.420467-3-jolsa@kernel.org
Peter Zijlstra (Intel) [Sat, 3 Sep 2022 13:11:53 +0000 (15:11 +0200)]
ftrace: Add HAVE_DYNAMIC_FTRACE_NO_PATCHABLE
x86 will shortly start using -fpatchable-function-entry for purposes
other than ftrace, so make sure the __patchable_function_entries section
isn't merged into the mcount_loc section.
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20220903131154.420467-2-jolsa@kernel.org
Yauheni Kaliuta [Mon, 5 Sep 2022 09:01:49 +0000 (12:01 +0300)]
bpf: Use bpf_capable() instead of CAP_SYS_ADMIN for blinding decision
The full CAP_SYS_ADMIN requirement for blinding looks too strict nowadays.
Given that unprivileged BPF is disabled by default these days, the main
users of constant blinding come from the unprivileged side, in particular
via the cBPF -> eBPF migration (e.g. old-style socket filters).
Signed-off-by: Yauheni Kaliuta <ykaliuta@redhat.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20220831090655.156434-1-ykaliuta@redhat.com
Link: https://lore.kernel.org/bpf/20220905090149.61221-1-ykaliuta@redhat.com
Wang Yufen [Tue, 13 Sep 2022 08:40:33 +0000 (16:40 +0800)]
bpf: use kvmemdup_bpfptr helper
Use kvmemdup_bpfptr helper instead of open-coding to
simplify the code.
Signed-off-by: Wang Yufen <wangyufen@huawei.com>
Acked-by: Stanislav Fomichev <sdf@google.com>
Link: https://lore.kernel.org/r/1663058433-14089-1-git-send-email-wangyufen@huawei.com
Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
Dave Marchevsky [Mon, 12 Sep 2022 15:45:44 +0000 (08:45 -0700)]
bpf: Add verifier check for BPF_PTR_POISON retval and arg
BPF_PTR_POISON was added in commit c0a5a21c25f37 ("bpf: Allow storing
referenced kptr in map") to denote a bpf_func_proto btf_id which the
verifier will replace with a dynamically-determined btf_id at verification
time.
This patch adds verifier 'poison' functionality to BPF_PTR_POISON in
order to prepare for expanded use of the value to poison ret- and
arg-btf_id in ongoing work, namely rbtree and linked list patchsets
[0, 1]. Specifically, when the verifier checks helper calls, it assumes
that BPF_PTR_POISON'ed ret type will be replaced with a valid type before
- or in lieu of - the default ret_btf_id logic. Similarly for arg btf_id.
If poisoned btf_id reaches default handling block for either, consider
this a verifier internal error and fail verification. Otherwise a helper
w/ poisoned btf_id but no verifier logic replacing the type will cause a
crash as the invalid pointer is dereferenced.
Also move BPF_PTR_POISON to the existing include/linux/poison.h header and
remove the unnecessary shift.
[0]: lore.kernel.org/bpf/20220830172759.4069786-1-davemarchevsky@fb.com
[1]: lore.kernel.org/bpf/20220904204145.3089-1-memxor@gmail.com
Signed-off-by: Dave Marchevsky <davemarchevsky@fb.com>
Acked-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
Link: https://lore.kernel.org/r/20220912154544.1398199-1-davemarchevsky@fb.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Dave Marchevsky [Thu, 8 Sep 2022 23:07:16 +0000 (16:07 -0700)]
bpf: Add verifier support for custom callback return range
Verifier logic to confirm that a callback function returns 0 or 1 was
added in commit 69c087ba6225b ("bpf: Add bpf_for_each_map_elem() helper").
At the time, callback return value was only used to continue or stop
iteration.
In order to support callbacks with a broader return value range, such as
those added in rbtree series[0] and others, add a callback_ret_range to
bpf_func_state. Verifier's helpers which set in_callback_fn will also
set the new field, which the verifier will later use to check return
value bounds.
Default to tnum_range(0, 0) instead of using tnum_unknown as a sentinel
value as the latter would prevent the valid range (0, U64_MAX) being
used. The previous global default, tnum_range(0, 1), is explicitly set for
the existing callback helpers. The change to the global default was made
after discussion around this patch in the rbtree series [1]; the goal here
is to make it more obvious that callback_ret_range should be explicitly set.
[0]: lore.kernel.org/bpf/20220830172759.4069786-1-davemarchevsky@fb.com/
[1]: lore.kernel.org/bpf/20220830172759.4069786-2-davemarchevsky@fb.com/
Signed-off-by: Dave Marchevsky <davemarchevsky@fb.com>
Reviewed-by: Stanislav Fomichev <sdf@google.com>
Link: https://lore.kernel.org/r/20220908230716.2751723-1-davemarchevsky@fb.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Lorenzo Bianconi [Thu, 8 Sep 2022 08:06:12 +0000 (10:06 +0200)]
selftests/bpf: fix ct status check in bpf_nf selftests
Properly check the connection tracking entry status configured by the
bpf_ct_change_status kfunc.
Remove unnecessary IPS_CONFIRMED status configuration since it is
already done during entry allocation.
Fixes: 6eb7fba007a7 ("selftests/bpf: Add tests for new nf_conntrack kfuncs")
Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>
Acked-by: Song Liu <song@kernel.org>
Link: https://lore.kernel.org/r/813a5161a71911378dfac8770ec890428e4998aa.1662623574.git.lorenzo@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Alexei Starovoitov [Sun, 11 Sep 2022 00:27:33 +0000 (17:27 -0700)]
Merge branch 'Support direct writes to nf_conn:mark'
Daniel Xu says:
====================
Support direct writes to nf_conn:mark from TC and XDP prog types. This
is useful when applications want to store per-connection metadata. This
is also particularly useful for applications that run both bpf and
iptables/nftables because the latter can trivially access this metadata.
One example use case would be if a bpf prog is responsible for advanced
packet classification and iptables/nftables is later used for routing
due to pre-existing/legacy code.
Past discussion:
- v4: https://lore.kernel.org/bpf/cover.1661192455.git.dxu@dxuuu.xyz/
- v3: https://lore.kernel.org/bpf/cover.1660951028.git.dxu@dxuuu.xyz/
- v2: https://lore.kernel.org/bpf/CAP01T74Sgn354dXGiFWFryu4vg+o8b9s9La1d9zEbC4LGvH4qg@mail.gmail.com/T/
- v1: https://lore.kernel.org/bpf/cover.1660592020.git.dxu@dxuuu.xyz/
Changes since v4:
- Use exported function pointer + mutex to handle CONFIG_NF_CONNTRACK=m
case
Changes since v3:
- Use a mutex to protect module load/unload critical section
Changes since v2:
- Remove use of NOT_INIT for btf_struct_access write path
- Disallow nf_conn writing when nf_conntrack module not loaded
- Support writing to nf_conn___init:mark
Changes since v1:
- Add unimplemented stub for when !CONFIG_BPF_SYSCALL
====================
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Daniel Xu [Wed, 7 Sep 2022 16:40:41 +0000 (10:40 -0600)]
selftests/bpf: Add tests for writing to nf_conn:mark
Add a simple extension to the existing selftest to write to
nf_conn:mark. Also add a failure test for writing to an unsupported field.
Signed-off-by: Daniel Xu <dxu@dxuuu.xyz>
Link: https://lore.kernel.org/r/f78966b81b9349d2b8ebb4cee2caf15cb6b38ee2.1662568410.git.dxu@dxuuu.xyz
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Daniel Xu [Wed, 7 Sep 2022 16:40:40 +0000 (10:40 -0600)]
bpf: Add support for writing to nf_conn:mark
Support direct writes to nf_conn:mark from TC and XDP prog types. This
is useful when applications want to store per-connection metadata. This
is also particularly useful for applications that run both bpf and
iptables/nftables because the latter can trivially access this metadata.
One example use case would be if a bpf prog is responsible for advanced
packet classification and iptables/nftables is later used for routing
due to pre-existing/legacy code.
Signed-off-by: Daniel Xu <dxu@dxuuu.xyz>
Link: https://lore.kernel.org/r/ebca06dea366e3e7e861c12f375a548cc4c61108.1662568410.git.dxu@dxuuu.xyz
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Daniel Xu [Wed, 7 Sep 2022 16:40:39 +0000 (10:40 -0600)]
bpf: Export btf_type_by_id() and bpf_log()
These symbols will be used in nf_conntrack.ko to support direct writes
to `nf_conn`.
Signed-off-by: Daniel Xu <dxu@dxuuu.xyz>
Link: https://lore.kernel.org/r/3c98c19dc50d3b18ea5eca135b4fc3a5db036060.1662568410.git.dxu@dxuuu.xyz
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Daniel Xu [Wed, 7 Sep 2022 16:40:38 +0000 (10:40 -0600)]
bpf: Use 0 instead of NOT_INIT for btf_struct_access() writes
Returning a bpf_reg_type only makes sense in the context of a BPF_READ.
For writes, prefer to explicitly return 0 for clarity.
Note that this is a non-functional change, as it just so happened that
NOT_INIT == 0.
Signed-off-by: Daniel Xu <dxu@dxuuu.xyz>
Link: https://lore.kernel.org/r/01772bc1455ae16600796ac78c6cc9fff34f95ff.1662568410.git.dxu@dxuuu.xyz
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Daniel Xu [Wed, 7 Sep 2022 16:40:37 +0000 (10:40 -0600)]
bpf: Add stub for btf_struct_access()
Add a corresponding unimplemented stub for when CONFIG_BPF_SYSCALL=n.
Signed-off-by: Daniel Xu <dxu@dxuuu.xyz>
Acked-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
Link: https://lore.kernel.org/r/4021398e884433b1fef57a4d28361bb9fcf1bd05.1662568410.git.dxu@dxuuu.xyz
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Daniel Xu [Wed, 7 Sep 2022 16:40:36 +0000 (10:40 -0600)]
bpf: Remove duplicate PTR_TO_BTF_ID RO check
Since commit 27ae7997a661 ("bpf: Introduce BPF_PROG_TYPE_STRUCT_OPS")
there has existed bpf_verifier_ops:btf_struct_access. When
btf_struct_access is _unset_ for a prog type, the verifier runs the
default implementation, which is to enforce read only:
if (env->ops->btf_struct_access) {
[...]
} else {
if (atype != BPF_READ) {
verbose(env, "only read is supported\n");
return -EACCES;
}
[...]
}
When btf_struct_access is _set_, the expectation is that
btf_struct_access has full control over accesses, including if writes
are allowed.
Rather than carve out an exception for each prog type that may write to
BTF ptrs, delete the redundant check and give full control to
btf_struct_access.
Signed-off-by: Daniel Xu <dxu@dxuuu.xyz>
Acked-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
Link: https://lore.kernel.org/r/962da2bff1238746589e332ff1aecc49403cd7ce.1662568410.git.dxu@dxuuu.xyz
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Punit Agrawal [Wed, 7 Sep 2022 15:57:46 +0000 (16:57 +0100)]
bpf: Simplify code by using for_each_cpu_wrap()
In the percpu freelist code, it is a common pattern to iterate over
the possible CPUs mask starting with the current CPU. The pattern is
implemented using a hand rolled while loop with the loop variable
increment being open-coded.
Simplify the code by using for_each_cpu_wrap() helper to iterate over
the possible cpus starting with the current CPU. As a result, some of
the special-casing in the loop also gets simplified.
No functional change intended.
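For illustration, the shape of the change is roughly the following (the
wrapper and callback names are illustrative, not the literal diff):
#include <linux/cpumask.h>
#include <linux/smp.h>

/* before: hand-rolled wrap-around over possible CPUs */
static void visit_all_cpus_old(void (*try_cpu)(int cpu))
{
        int orig_cpu = raw_smp_processor_id();
        int cpu = orig_cpu;

        while (1) {
                try_cpu(cpu);
                cpu = cpumask_next(cpu, cpu_possible_mask);
                if (cpu >= nr_cpu_ids)
                        cpu = 0;
                if (cpu == orig_cpu)
                        break;
        }
}

/* after: for_each_cpu_wrap() starts at the current CPU and wraps around,
 * so the special-casing above disappears */
static void visit_all_cpus_new(void (*try_cpu)(int cpu))
{
        int cpu;

        for_each_cpu_wrap(cpu, cpu_possible_mask, raw_smp_processor_id())
                try_cpu(cpu);
}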
Signed-off-by: Punit Agrawal <punit.agrawal@bytedance.com>
Acked-by: Song Liu <song@kernel.org>
Link: https://lore.kernel.org/r/20220907155746.1750329-1-punit.agrawal@bytedance.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Tetsuo Handa [Sat, 10 Sep 2022 15:07:11 +0000 (00:07 +0900)]
bpf: add missing percpu_counter_destroy() in htab_map_alloc()
syzbot is reporting an ODEBUG bug in htab_map_alloc() [1]: commit
86fe28f7692d96d2 ("bpf: Optimize element count in non-preallocated
hash map.") added percpu_counter_init() to htab_map_alloc() but forgot to
add percpu_counter_destroy() to the error path.
Link: https://syzkaller.appspot.com/bug?extid=5d1da78b375c3b5e6c2b
Reported-by: syzbot <syzbot+5d1da78b375c3b5e6c2b@syzkaller.appspotmail.com>
Signed-off-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
Fixes: 86fe28f7692d96d2 ("bpf: Optimize element count in non-preallocated hash map.")
Reviewed-by: Stanislav Fomichev <sdf@google.com>
Link: https://lore.kernel.org/r/e2e4cc0e-9d36-4ca1-9bfa-ce23e6f8310b@I-love.SAKURA.ne.jp
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Martin KaFai Lau [Fri, 9 Sep 2022 17:40:45 +0000 (10:40 -0700)]
Merge branch 'cgroup/connect{4,6} programs for unprivileged ICMP ping'
YiFei Zhu says:
====================
Usually when a TCP/UDP connection is initiated, we can bind the socket
to a specific IP attached to an interface in a cgroup/connect hook.
But for pings, this is impossible, as the hook is not being called.
This series adds the invocation for cgroup/connect{4,6} programs to
unprivileged ICMP ping (i.e. ping sockets created with SOCK_DGRAM
IPPROTO_ICMP(V6) as opposed to SOCK_RAW). This also adds a test to
verify that the hooks are being called and invoking bpf_bind() from
within the hook actually binds the socket.
Patch 1 adds the invocation of the hook.
Patch 2 deduplicates write_sysctl in BPF test_progs.
Patch 3 adds the tests for this hook.
v1 -> v2:
* Added static to bindaddr_v6 in prog_tests/connect_ping.c
* Deduplicated much of the test logic in prog_tests/connect_ping.c
* Deduplicated write_sysctl() to test_progs.c
v2 -> v3:
* Renamed variable "obj" to "skel" for the BPF skeleton object in
prog_tests/connect_ping.c
v3 -> v4:
* Fixed error path to destroy skel in prog_tests/connect_ping.c
====================
Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
YiFei Zhu [Fri, 9 Sep 2022 00:49:41 +0000 (00:49 +0000)]
selftests/bpf: Ensure cgroup/connect{4,6} programs can bind unpriv ICMP ping
This tests that when an unprivileged ICMP ping socket connects,
the hooks are actually invoked. We also ensure that if the hook does
not call bpf_bind(), the bound address is unmodified, and if the
hook calls bpf_bind(), the bound address is exactly what we provided
to the helper.
A new netns is used to enable ping_group_range in the test without
affecting anything outside of the test, because by default not even root
is permitted to use unprivileged ICMP ping...
Signed-off-by: YiFei Zhu <zhuyifei@google.com>
Link: https://lore.kernel.org/r/086b227c1b97f4e94193e58aae7576d0261b68a4.1662682323.git.zhuyifei@google.com
Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
YiFei Zhu [Fri, 9 Sep 2022 00:49:40 +0000 (00:49 +0000)]
selftests/bpf: Deduplicate write_sysctl() to test_progs.c
This helper is needed in multiple tests. Instead of copying it over
and over, better to deduplicate this helper to test_progs.c.
test_progs.c is chosen over testing_helpers.c because of this helper's
use of CHECK / ASSERT_*, and the CHECK was modified to use ASSERT_*
so it does not rely on a duration variable.
Suggested-by: Martin KaFai Lau <kafai@fb.com>
Signed-off-by: YiFei Zhu <zhuyifei@google.com>
Link: https://lore.kernel.org/r/9b4fc9a27bd52f771b657b4c4090fc8d61f3a6b5.1662682323.git.zhuyifei@google.com
Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
YiFei Zhu [Fri, 9 Sep 2022 00:49:39 +0000 (00:49 +0000)]
bpf: Invoke cgroup/connect{4,6} programs for unprivileged ICMP ping
Usually when a TCP/UDP connection is initiated, we can bind the socket
to a specific IP attached to an interface in a cgroup/connect hook.
But for pings, this is impossible, as the hook is not being called.
This adds the hook invocation to unprivileged ICMP ping (i.e. ping
sockets created with SOCK_DGRAM IPPROTO_ICMP(V6) as opposed to
SOCK_RAW). The logic is mirrored from UDP sockets, where the hook is
invoked during pre_connect, after a check for a sufficiently sized addr_len.
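A hedged sketch of a cgroup/connect4 program that now also observes
unprivileged ping sockets (the bind address, includes and constants are
illustrative assumptions, not taken from the patch):
#include <linux/bpf.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_endian.h>

SEC("cgroup/connect4")
int bind_ping_v4(struct bpf_sock_addr *ctx)
{
        struct sockaddr_in addr = {
                .sin_family = AF_INET,
                .sin_addr.s_addr = bpf_htonl(0x7f000002),   /* 127.0.0.2 */
        };

        /* with this change, SOCK_DGRAM/IPPROTO_ICMP sockets also get here */
        if (ctx->type == SOCK_DGRAM && ctx->protocol == IPPROTO_ICMP)
                bpf_bind(ctx, (struct sockaddr *)&addr, sizeof(addr));

        return 1;       /* allow the connect() */
}

char _license[] SEC("license") = "GPL";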
Signed-off-by: YiFei Zhu <zhuyifei@google.com>
Link: https://lore.kernel.org/r/5764914c252fad4cd134fb6664c6ede95f409412.1662682323.git.zhuyifei@google.com
Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
Daniel Borkmann [Fri, 9 Sep 2022 14:15:11 +0000 (16:15 +0200)]
libbpf: Remove gcc support for bpf_tail_call_static for now
This reverts commit 14e5ce79943a ("libbpf: Add GCC support for
bpf_tail_call_static"). The reason is that gcc invented its own BPF asm
syntax which does not conform to the LLVM one, and going forward this
would be more painful to maintain here and in other areas of the library.
Thus remove it; the ask to the gcc folks is to align with LLVM and use the
exact same syntax.
Fixes: 14e5ce79943a ("libbpf: Add GCC support for bpf_tail_call_static")
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Cc: James Hilliard <james.hilliard1@gmail.com>
Cc: Jose E. Marchesi <jose.marchesi@oracle.com>
Kumar Kartikeya Dwivedi [Sun, 4 Sep 2022 20:41:28 +0000 (22:41 +0200)]
bpf: Add helper macro bpf_for_each_reg_in_vstate
For a lot of use cases in future patches, we will want to modify the
state of registers that are part of the same 'group' (e.g. same ref_obj_id).
It won't just be limited to releasing reference state, but also to setting a
type flag dynamically based on certain actions, etc.
Hence, we need a way to easily pass a callback to the function that
iterates over all registers in the current bpf_verifier_state, in all frames
up to (and including) the curframe.
While in C++ we would be able to easily use a lambda to pass state and
the callback together, sadly we aren't using C++ in the kernel. The next
best thing to avoid defining a function for each case seems to be
statement expressions in GNU C. The kernel already uses them heavily,
hence they can be passed to the macro in the style of a lambda. The
statement expression will then be substituted in the for loop bodies.
Variables __state and __reg are set to current bpf_func_state and reg
for each invocation of the expression inside the passed in verifier
state.
Then, convert mark_ptr_or_null_regs, clear_all_pkt_pointers,
release_reference, find_good_pkt_pointers, find_equal_scalars to
use bpf_for_each_reg_in_vstate.
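A simplified sketch of what the macro shape and usage can look like (spilled-
register handling and exact field names are omitted; this is not the literal
implementation, and the wrapper function name is illustrative):
#define bpf_for_each_reg_in_vstate(__VST, __state, __reg, __expr)      \
        ({                                                              \
                int ___i, ___j;                                         \
                for (___i = 0; ___i <= (__VST)->curframe; ___i++) {    \
                        __state = (__VST)->frame[___i];                 \
                        for (___j = 0; ___j < MAX_BPF_REG; ___j++) {    \
                                __reg = &__state->regs[___j];           \
                                (void)(__expr);                         \
                        }                                               \
                }                                                       \
        })

/* e.g. releasing every register that shares a ref_obj_id, with the loop
 * body passed as a GNU C statement expression: */
static void release_all_with_id(struct bpf_verifier_env *env, int ref_obj_id)
{
        struct bpf_func_state *state;
        struct bpf_reg_state *reg;

        bpf_for_each_reg_in_vstate(env->cur_state, state, reg, ({
                if (reg->ref_obj_id == ref_obj_id)
                        __mark_reg_unknown(env, reg);
        }));
}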
Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
Link: https://lore.kernel.org/r/20220904204145.3089-16-memxor@gmail.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Kumar Kartikeya Dwivedi [Sun, 4 Sep 2022 20:41:16 +0000 (22:41 +0200)]
bpf: Add zero_map_value to zero map value with special fields
We need this helper to skip over special fields (bpf_spin_lock,
bpf_timer, kptrs) while zeroing a map value. Use the same logic as
copy_map_value but memset instead of memcpy.
Currently, the code zeroing map value memory does not have to deal with
special fields, hence this is a prerequisite for introducing such
support.
Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
Link: https://lore.kernel.org/r/20220904204145.3089-4-memxor@gmail.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Kumar Kartikeya Dwivedi [Sun, 4 Sep 2022 20:41:15 +0000 (22:41 +0200)]
bpf: Support kptrs in percpu arraymap
Enable support for kptrs in percpu BPF arraymap by wiring up the freeing
of these kptrs from percpu map elements.
Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
Link: https://lore.kernel.org/r/20220904204145.3089-3-memxor@gmail.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Kumar Kartikeya Dwivedi [Sun, 4 Sep 2022 20:41:14 +0000 (22:41 +0200)]
bpf: Add copy_map_value_long to copy to remote percpu memory
bpf_long_memcpy is used while copying to remote percpu regions from BPF
syscall and helpers, so that the copy is atomic at word size
granularity.
This might not be possible when copying a map value hosting kptrs from
or to percpu maps, as the alignment or size of the disjoint regions may
not be a multiple of the word size.
Hence, to avoid complicating the copy loop, we only use bpf_long_memcpy
when special fields are not present, otherwise use normal memcpy to copy
the disjoint regions.
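A sketch of that selection (names such as off_arr and __copy_map_value are
borrowed from the existing copy_map_value code and should be treated as
assumptions here):

  static inline void copy_map_value_long(struct bpf_map *map, void *dst, void *src)
  {
          if (!map->off_arr) {
                  /* No special fields: word-granular, atomic-per-word copy. */
                  bpf_long_memcpy(dst, src, round_up(map->value_size, 8));
          } else {
                  /* Special fields present: plain memcpy of the disjoint
                   * regions, which need not be word-sized or word-aligned.
                   */
                  __copy_map_value(map, dst, src, false);
          }
  }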
Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
Link: https://lore.kernel.org/r/20220904204145.3089-2-memxor@gmail.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Jules Irenge [Wed, 7 Sep 2022 15:24:20 +0000 (16:24 +0100)]
bpf: Fix resetting logic for unreferenced kptrs
Sparse reported a warning at bpf_map_free_kptrs()
"warning: Using plain integer as NULL pointer"
During the process of fixing this warning, it was discovered that the
current code erroneously writes to the pointer variable instead of
dereferencing and writing to the actual kptr. Hence, the Sparse tool
accidentally helped to uncover this problem. Fix this by doing
WRITE_ONCE(*p, 0) instead of WRITE_ONCE(p, 0).
Note that the effect of this bug is that unreferenced kptrs will not be
cleared during check_and_free_fields. It is not a problem if the
clearing is not done during the map_free stage, as there is nothing to
free for them.
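For context, a sketch of the affected loop in bpf_map_free_kptrs()
(surrounding details simplified):

  for (i = 0; i < tab->nr_off; i++) {
          struct bpf_map_value_off_desc *off_desc = &tab->off[i];
          unsigned long *btf_id_ptr = map_value + off_desc->offset;

          if (off_desc->type == BPF_KPTR_UNREF) {
                  u64 *p = (u64 *)btf_id_ptr;

                  WRITE_ONCE(*p, 0);      /* was WRITE_ONCE(p, 0), which only
                                           * cleared the local pointer */
                  continue;
          }
          /* Referenced kptrs: xchg() the slot and run the destructor. */
          old_ptr = xchg(btf_id_ptr, 0);
          off_desc->kptr.dtor((void *)old_ptr);
  }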
Fixes: 14a324f6a67e ("bpf: Wire up freeing of referenced kptr")
Signed-off-by: Jules Irenge <jbi.octave@gmail.com>
Link: https://lore.kernel.org/r/Yxi3pJaK6UDjVJSy@playground
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Alexei Starovoitov [Wed, 7 Sep 2022 17:57:29 +0000 (10:57 -0700)]
Merge branch 'bpf-core changes for preparation of HID-bpf'
Benjamin Tissoires says:
====================
Hi,
well, given that the HID changes haven't moved a lot in the past
revisions and that I am cc-ing a bunch of people, I have dropped them
while we focus on the last 2 requirements in the bpf-core changes.
I'll submit a HID-targeted series when we get these in tree, which
would make things a lot more independent.
For reference, the full reasoning for these 2 main changes is at
https://lore.kernel.org/bpf/20220902132938.2409206-1-benjamin.tissoires@redhat.com/
Compared to v10 (in addition to dropping the HID changes), I have
changed the selftests so we can test both light skeletons and libbpf
calls.
Cheers,
Benjamin
====================
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Benjamin Tissoires [Tue, 6 Sep 2022 15:13:03 +0000 (17:13 +0200)]
selftests/bpf: Add tests for kfunc returning a memory pointer
We add 2 new kfuncs that follow the RET_PTR_TO_MEM capability from the
previous commit.
Then we test them in selftests: the first tests exercise valid cases
and do not fail, while the later ones are intentionally invalid and
must prevent the program from being loaded.
To work around that, we mark the failing ones as not autoloaded (with
SEC("?tc")) and manually enable them one by one, ensuring the verifier
rejects them.
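A sketch of the userspace side of that pattern (the skeleton and program
names here are hypothetical):

  struct kfunc_call_fail *skel;
  int err;

  skel = kfunc_call_fail__open();
  if (!ASSERT_OK_PTR(skel, "skel_open"))
          return;

  /* Programs declared with SEC("?tc") are not auto-loaded; enable exactly
   * one failing program and expect the verifier to reject the load.
   */
  bpf_program__set_autoload(skel->progs.kfunc_call_fail_oob, true);
  err = kfunc_call_fail__load(skel);
  ASSERT_ERR(err, "load must fail");

  kfunc_call_fail__destroy(skel);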
Signed-off-by: Benjamin Tissoires <benjamin.tissoires@redhat.com>
Link: https://lore.kernel.org/r/20220906151303.2780789-8-benjamin.tissoires@redhat.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>