platform/kernel/linux-rpi.git
3 years ago bpf: bpf_fib_lookup return MTU value as output when looked up
Jesper Dangaard Brouer [Tue, 9 Feb 2021 13:38:19 +0000 (14:38 +0100)]
bpf: bpf_fib_lookup return MTU value as output when looked up

The BPF helpers for FIB lookup (bpf_xdp_fib_lookup and bpf_skb_fib_lookup)
can perform an MTU check and return BPF_FIB_LKUP_RET_FRAG_NEEDED, but the
BPF prog doesn't know the MTU value that caused this rejection.

If the BPF prog wants to implement PMTU (Path MTU Discovery, RFC 1191) it
needs to know this MTU value for the ICMP packet.

This patch changes the lookup and result struct bpf_fib_lookup to contain
this MTU value as output, via a union with 'tot_len', as this is the value
used for the MTU lookup.
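
For illustration only (not part of the patch; the lookup-key setup is omitted
and all other details are assumptions), an XDP program could consume the new
output member roughly like this:

  #include <linux/bpf.h>
  #include <bpf/bpf_helpers.h>

  SEC("xdp")
  int xdp_pmtu_probe(struct xdp_md *ctx)
  {
          struct bpf_fib_lookup params = {};

          /* ... fill params.family, addresses, ifindex, tot_len ... */
          if (bpf_fib_lookup(ctx, &params, sizeof(params), 0) ==
              BPF_FIB_LKUP_RET_FRAG_NEEDED) {
                  /* params.mtu_result (union with tot_len) now carries the
                   * MTU that caused the rejection, e.g. for building an
                   * ICMP "frag needed" reply.
                   */
          }
          return XDP_PASS;
  }

  char _license[] SEC("license") = "GPL";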

V5:
 - Fixed uninit value spotted by Dan Carpenter.
 - Name struct output member mtu_result

Reported-by: kernel test robot <lkp@intel.com>
Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Link: https://lore.kernel.org/bpf/161287789952.790810.13134700381067698781.stgit@firesoul
3 years ago bpf: Fix bpf_fib_lookup helper MTU check for SKB ctx
Jesper Dangaard Brouer [Tue, 9 Feb 2021 13:38:14 +0000 (14:38 +0100)]
bpf: Fix bpf_fib_lookup helper MTU check for SKB ctx

A BPF end-user on the Cilium Slack channel (Carlo Carraro) wants to use
bpf_fib_lookup for doing an MTU check, but *prior* to extending the packet
size, by adjusting fib_params 'tot_len' with the packet length plus the
expected encap size (just like the bpf_check_mtu helper supports). He
discovered that for the SKB ctx, param->tot_len was not used; instead
skb->len was used (via the MTU check in is_skb_forwardable(), which checks
against the netdev MTU).

Fix this by using fib_params 'tot_len' for the MTU check. If not provided
(i.e. zero), keep the existing TC behaviour intact. Notice that the 'tot_len'
MTU check is done like in the XDP code path, which checks against the FIB
destination MTU.

V16:
- Revert V13 optimization, 2nd lookup is against egress/resulting netdev

V13:
- Only do ifindex lookup one time, calling dev_get_by_index_rcu().

V10:
- Use same method as XDP for 'tot_len' MTU check

Fixes: 4c79579b44b1 ("bpf: Change bpf_fib_lookup to return lookup status")
Reported-by: Carlo Carraro <colrack@gmail.com>
Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/161287789444.790810.15247494756551413508.stgit@firesoul
3 years ago bpf: Remove MTU check in __bpf_skb_max_len
Jesper Dangaard Brouer [Tue, 9 Feb 2021 13:38:09 +0000 (14:38 +0100)]
bpf: Remove MTU check in __bpf_skb_max_len

Multiple BPF helpers that can manipulate/increase the size of the SKB use
__bpf_skb_max_len() as the max length. This function limits the size against
the current net_device MTU (skb->dev->mtu).

When a BPF prog grows the packet size, it should not be limited to the
MTU. The MTU is a transmit limitation, and software receiving this packet
should be allowed to increase the size. Furthermore, the current MTU check in
__bpf_skb_max_len uses the MTU from the ingress/current net_device, which in
case of redirects is the wrong net_device.

This patch keeps a sanity max limit of SKB_MAX_ALLOC (16KiB); the real limit
is enforced elsewhere in the system. Jesper's testing[1] showed it was not
possible to exceed 8KiB when expanding the SKB size via a BPF helper. The
limiting factor is the define KMALLOC_MAX_CACHE_SIZE, which is 8192 for the
SLUB allocator (CONFIG_SLUB) in case PAGE_SIZE is 4096. This define is in
effect because the expansion happens from softirq context; see
__gfp_pfmemalloc_flags() and __do_kmalloc_node(). Jakub's testing showed
that frames above 16KiB can cause NICs to reset (but not crash). Keep this
sanity limit at this level as the memory layer can differ based on kernel
config.

[1] https://github.com/xdp-project/bpf-examples/tree/master/MTU-tests

Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Link: https://lore.kernel.org/bpf/161287788936.790810.2937823995775097177.stgit@firesoul
3 years ago bpf, devmap: Use GFP_KERNEL for xdp bulk queue allocation
Jun'ichi Nomura [Tue, 9 Feb 2021 08:24:52 +0000 (08:24 +0000)]
bpf, devmap: Use GFP_KERNEL for xdp bulk queue allocation

The devmap bulk queue is allocated with GFP_ATOMIC, and the allocation
may fail if there is no available space in the existing percpu pool.

Since commit 75ccae62cb8d42 ("xdp: Move devmap bulk queue into struct net_device")
moved the bulk queue allocation to the NETDEV_REGISTER callback, whose context
is allowed to sleep, use GFP_KERNEL instead of GFP_ATOMIC to let the percpu
allocator extend the pool when needed and avoid a possible failure of netdev
registration.

As the required alignment is natural, we can simply use alloc_percpu().
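
Hedged sketch of the resulting allocation in the NETDEV_REGISTER handler
(identifier names approximate, error handling abbreviated):

  /* GFP_KERNEL-backed percpu allocation; natural alignment suffices,
   * so the plain alloc_percpu() macro is enough.
   */
  netdev->xdp_bulkq = alloc_percpu(struct xdp_dev_bulk_queue);
  if (!netdev->xdp_bulkq)
          return NOTIFY_BAD;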

Fixes: 75ccae62cb8d42 ("xdp: Move devmap bulk queue into struct net_device")
Signed-off-by: Jun'ichi Nomura <junichi.nomura@nec.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Cc: Toke Høiland-Jørgensen <toke@redhat.com>
Link: https://lore.kernel.org/bpf/20210209082451.GA44021@jeru.linux.bs1.fc.nec.co.jp
3 years ago bpf: Fix an uninitialized value in bpf_iter
Yonghong Song [Fri, 12 Feb 2021 00:59:26 +0000 (16:59 -0800)]
bpf: Fix an uninitialized value in bpf_iter

Commit 15d83c4d7cef ("bpf: Allow loading of a bpf_iter program")
cached btf_id in struct bpf_iter_target_info so that later on
it can be checked cheaply compared to checking registered names.

syzbot found a bug where an uninitialized value may occur in
bpf_iter_target_info->btf_id. This is because we allocated the
bpf_iter_target_info structure with kmalloc and never initialized
the btf_id field afterwards. This uninitialized btf_id is typically
compared to a u32 bpf program func proto btf_id, and the chance
of them being equal is extremely slim.

This patch fixes the issue by using kzalloc, which will also
prevent likely future instances as new fields are added.
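
The shape of the fix (variable name illustrative):

  /* kzalloc() zeroes the whole struct, so btf_id and any fields added
   * later start out initialized.
   */
  tinfo = kzalloc(sizeof(*tinfo), GFP_KERNEL);
  if (!tinfo)
          return -ENOMEM;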

Fixes: 15d83c4d7cef ("bpf: Allow loading of a bpf_iter program")
Reported-by: syzbot+580f4f2a272e452d55cb@syzkaller.appspotmail.com
Signed-off-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20210212005926.2875002-1-yhs@fb.com
3 years ago tools/resolve_btfids: Add /libbpf to .gitignore
Stanislav Fomichev [Fri, 12 Feb 2021 01:00:53 +0000 (17:00 -0800)]
tools/resolve_btfids: Add /libbpf to .gitignore

This is what I see after compiling the kernel:

 # bpf-next...bpf-next/master
 ?? tools/bpf/resolve_btfids/libbpf/

Fixes: fc6b48f692f8 ("tools/resolve_btfids: Build libbpf and libsubcmd in separate directories")
Signed-off-by: Stanislav Fomichev <sdf@google.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20210212010053.668700-1-sdf@google.com
3 years ago Merge branch 'introduce bpf_iter for task_vma'
Alexei Starovoitov [Fri, 12 Feb 2021 20:56:54 +0000 (12:56 -0800)]
Merge branch 'introduce bpf_iter for task_vma'

Song Liu says:

====================

This set introduces bpf_iter for task_vma, which can be used to generate
information similar to /proc/pid/maps. Patch 4/4 adds an example that
mimics /proc/pid/maps.

The current /proc/<pid>/maps and /proc/<pid>/smaps provide information about
the vma's of a process. However, this information is not flexible enough to
cover all use cases. For example, if a vma covers mixed 2MB pages and 4kB
pages (x86_64), there is no easy way to tell which address ranges are
backed by 2MB pages. task_vma solves the problem by enabling the user to
generate customized information based on the vma (and vma->vm_mm,
vma->vm_file, etc.).

Changes v6 => v7:
  1. Let BPF iter program use bpf_d_path without specifying sleepable.
     (Alexei)

Changes v5 => v6:
  1. Add more comments for task_vma_seq_get_next() to explain the logic
     of find_vma() calls. (Alexei)
  2. Skip the vma found by find_vma() when both vm_start and vm_end match
     prev_vm_[start|end]. Previous versions only compared vm_start.
     IOW, if a vma of [4k, 8k] is replaced by [4k, 12k] after relocking
     mmap_lock, v5 will skip the new vma, while v6 will process it.

Changes v4 => v5:
  1. Fix a refcount leak on task_struct. (Yonghong)
  2. Fix the selftest. (Yonghong)

Changes v3 => v4:
  1. Avoid skipping vma by assigning invalid prev_vm_start in
     task_vma_seq_stop(). (Yonghong)
  2. Move the "again" label in task_vma_seq_get_next() to save a check. (Yonghong)

Changes v2 => v3:
  1. Rewrite 1/4 so that we hold mmap_lock while calling BPF program. This
     enables the BPF program to access the real vma with BTF. (Alexei)
  2. Fix the logic when the control is returned to user space. (Yonghong)
  3. Revise commit log and cover letter. (Yonghong)

Changes v1 => v2:
  1. Small fixes in task_iter.c and the selftests. (Yonghong)
====================

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
3 years ago selftests/bpf: Add test for bpf_iter_task_vma
Song Liu [Fri, 12 Feb 2021 18:31:07 +0000 (10:31 -0800)]
selftests/bpf: Add test for bpf_iter_task_vma

The test dumps information similar to /proc/pid/maps. The first line of
the output is compared against the /proc file to make sure they match.

Signed-off-by: Song Liu <songliubraving@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Yonghong Song <yhs@fb.com>
Link: https://lore.kernel.org/bpf/20210212183107.50963-4-songliubraving@fb.com
3 years ago bpf: Allow bpf_d_path in bpf_iter program
Song Liu [Fri, 12 Feb 2021 18:31:06 +0000 (10:31 -0800)]
bpf: Allow bpf_d_path in bpf_iter program

task_file and task_vma iter programs have access to file->f_path. Enable
bpf_d_path to print the paths of these files.
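
A hedged sketch of what this enables, e.g. in a task_file iterator (program
details are illustrative, not taken from the selftests):

  #include "vmlinux.h"
  #include <bpf/bpf_helpers.h>
  #include <bpf/bpf_tracing.h>

  char _license[] SEC("license") = "GPL";

  SEC("iter/task_file")
  int dump_file_path(struct bpf_iter__task_file *ctx)
  {
          struct file *file = ctx->file;
          char path[128];

          if (!file)
                  return 0;
          /* previously rejected for iter programs, now allowed */
          bpf_d_path(&file->f_path, path, sizeof(path));
          BPF_SEQ_PRINTF(ctx->meta->seq, "%s\n", path);
          return 0;
  }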

Signed-off-by: Song Liu <songliubraving@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Yonghong Song <yhs@fb.com>
Link: https://lore.kernel.org/bpf/20210212183107.50963-3-songliubraving@fb.com
3 years ago bpf: Introduce task_vma bpf_iter
Song Liu [Fri, 12 Feb 2021 18:31:05 +0000 (10:31 -0800)]
bpf: Introduce task_vma bpf_iter

Introduce task_vma bpf_iter to print memory information of a process. It
can be used to print customized information similar to /proc/<pid>/maps.

The current /proc/<pid>/maps and /proc/<pid>/smaps provide information about
the vma's of a process. However, this information is not flexible enough to
cover all use cases. For example, if a vma covers mixed 2MB pages and 4kB
pages (x86_64), there is no easy way to tell which address ranges are
backed by 2MB pages. task_vma solves the problem by enabling the user to
generate customized information based on the vma (and vma->vm_mm,
vma->vm_file, etc.).

To access the vma safely in the BPF program, the task_vma iterator holds
the target's mmap_lock while calling the BPF program. If the mmap_lock is
contended, task_vma unlocks the mmap_lock between iterations to unblock the
writer(s). This lock contention avoidance mechanism is similar to the one
used in show_smaps_rollup().
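
A hedged sketch of a task_vma iterator program (section and context field
names as used by the accompanying selftest; the output format is illustrative):

  #include "vmlinux.h"
  #include <bpf/bpf_helpers.h>
  #include <bpf/bpf_tracing.h>

  char _license[] SEC("license") = "GPL";

  SEC("iter/task_vma")
  int proc_maps(struct bpf_iter__task_vma *ctx)
  {
          struct vm_area_struct *vma = ctx->vma;
          struct task_struct *task = ctx->task;

          if (!vma || !task)
                  return 0;
          /* customized per-vma output, e.g. pid and address range */
          BPF_SEQ_PRINTF(ctx->meta->seq, "%d: %08llx-%08llx\n",
                         task->pid, vma->vm_start, vma->vm_end);
          return 0;
  }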

Signed-off-by: Song Liu <songliubraving@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Yonghong Song <yhs@fb.com>
Link: https://lore.kernel.org/bpf/20210212183107.50963-2-songliubraving@fb.com
3 years ago bpf: selftests: Add non function pointer test to struct_ops
Martin KaFai Lau [Fri, 12 Feb 2021 02:10:37 +0000 (18:10 -0800)]
bpf: selftests: Add non function pointer test to struct_ops

This patch adds a "void *owner" member.  The existing
bpf_tcp_ca test will ensure the bpf_cubic.o and bpf_dctcp.o
can be loaded.

Signed-off-by: Martin KaFai Lau <kafai@fb.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20210212021037.267278-1-kafai@fb.com
3 years ago libbpf: Ignore non function pointer member in struct_ops
Martin KaFai Lau [Fri, 12 Feb 2021 02:10:30 +0000 (18:10 -0800)]
libbpf: Ignore non function pointer member in struct_ops

When libbpf initializes the kernel's struct_ops in
"bpf_map__init_kern_struct_ops()", it enforces that all
pointer members must be function pointers and rejects
others.  This turns out to be too strict.  For example,
when directly using "struct tcp_congestion_ops" from vmlinux.h,
it has a "struct module *owner" member which is set to NULL
in a bpf_tcp_cc.o.

Instead, it only needs to ensure the member is a function
pointer if it has been set (relocated) to a bpf-prog.
This patch moves the "btf_is_func_proto(kern_mtype)" check
after the existing "if (!prog) { continue; }".  The original debug
message in "if (!prog) { continue; }" is also removed since it is
no longer valid.  Besides, there is a later debug message telling
which function pointer is set.

The "btf_is_func_proto(mtype)" has already been guaranteed
in "bpf_object__collect_st_ops_relos()" which has been run
before "bpf_map__init_kern_struct_ops()".  Thus, this check
is removed.
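
For illustration, a hedged sketch of a struct_ops map that now loads; the
program names and the subset of ops shown are assumptions, the point being
that the non-function pointer 'owner' simply stays NULL and is skipped:

  SEC(".struct_ops")
  struct tcp_congestion_ops bpf_cc_example = {
          /* function pointers, each relocated to a BPF program */
          .init     = (void *)bpf_cc_example_init,
          .ssthresh = (void *)bpf_cc_example_ssthresh,
          .name     = "bpf_cc_example",
          /* .owner (struct module *) is left NULL; libbpf now ignores it */
  };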

v2:
- Remove outdated debug message (Andrii)
  Remove because there is a later debug message to tell
  which function pointer is set.
- Following mtype->type is no longer needed. Remove:
  "skip_mods_and_typedefs(btf, mtype->type, &mtype_id)"
- Do "if (!prog)" test before skip_mods_and_typedefs.

Fixes: 590a00888250 ("bpf: libbpf: Add STRUCT_OPS support")
Signed-off-by: Martin KaFai Lau <kafai@fb.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20210212021030.266932-1-kafai@fb.com
3 years ago libbpf: Use AF_LOCAL instead of AF_INET in xsk.c
Stanislav Fomichev [Tue, 9 Feb 2021 22:18:26 +0000 (14:18 -0800)]
libbpf: Use AF_LOCAL instead of AF_INET in xsk.c

We have environments where the usage of AF_INET is prohibited
(cgroup/sock_create returns EPERM for AF_INET). Let's use
AF_LOCAL instead of AF_INET; it works perfectly well with SIOCETHTOOL.
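
A simplified, hedged fragment of the pattern (headers and the surrounding
function omitted; 'ifname' is assumed to be in scope):

  struct ethtool_channels channels = { .cmd = ETHTOOL_GCHANNELS };
  struct ifreq ifr = {};
  int fd, err;

  /* any socket fd works for the ethtool ioctl, and AF_LOCAL avoids the
   * cgroup/sock_create restriction on AF_INET
   */
  fd = socket(AF_LOCAL, SOCK_DGRAM, 0);
  if (fd < 0)
          return -errno;

  ifr.ifr_data = (void *)&channels;
  memcpy(ifr.ifr_name, ifname, IFNAMSIZ);
  err = ioctl(fd, SIOCETHTOOL, &ifr);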

Signed-off-by: Stanislav Fomichev <sdf@google.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Tested-by: Björn Töpel <bjorn.topel@intel.com>
Acked-by: Björn Töpel <bjorn.topel@intel.com>
Link: https://lore.kernel.org/bpf/20210209221826.922940-1-sdf@google.com
3 years ago bpf: Fix subreg optimization for BPF_FETCH
Ilya Leoshkevich [Wed, 10 Feb 2021 20:45:02 +0000 (21:45 +0100)]
bpf: Fix subreg optimization for BPF_FETCH

All 32-bit variants of BPF_FETCH (add, and, or, xor, xchg, cmpxchg)
define a 32-bit subreg and thus have zext_dst set. Their encoding,
however, uses dst_reg field as a base register, which causes
opt_subreg_zext_lo32_rnd_hi32() to zero-extend said base register
instead of the one the insn really defines (r0 or src_reg).

Fix by properly choosing a register being defined, similar to how
check_atomic() already does that.

Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20210210204502.83429-1-iii@linux.ibm.com
3 years ago docs: bpf: Clarify BPF_CMPXCHG wording
Ilya Leoshkevich [Wed, 10 Feb 2021 14:28:52 +0000 (15:28 +0100)]
docs: bpf: Clarify BPF_CMPXCHG wording

Based on [1], BPF_CMPXCHG should always load the old value into R0. The
phrasing in bpf.rst is somewhat ambiguous in this regard, improve it to
make this aspect crystal clear.
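
As an illustration (not from the patch; variables assumed), C code that a
Clang new enough to emit BPF atomics lowers to BPF_CMPXCHG behaves as follows:

  /* R0 always receives the value that was at the address before the
   * operation, whether or not the exchange actually happened.
   */
  __u64 old = __sync_val_compare_and_swap(&val, expected, new_val);
  if (old == expected) {
          /* the exchange succeeded and val now holds new_val */
  }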

  [1] https://lore.kernel.org/bpf/CAADnVQJFcFwxEz=wnV=hkie-EDwa8s5JGbBQeFt1TGux1OihJw@mail.gmail.com/

Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20210210142853.82203-1-iii@linux.ibm.com
3 years ago bpf: Clear per_cpu pointers during bpf_prog_realloc
Alexei Starovoitov [Fri, 12 Feb 2021 03:35:00 +0000 (19:35 -0800)]
bpf: Clear per_cpu pointers during bpf_prog_realloc

bpf_prog_realloc copies the contents of struct bpf_prog.
The per-cpu pointers have to be cleared before freeing the old struct.

Reported-by: Ilya Leoshkevich <iii@linux.ibm.com>
Fixes: 700d4796ef59 ("bpf: Optimize program stats")
Fixes: ca06f55b9002 ("bpf: Add per-program recursion prevention mechanism")
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
3 years ago selftests/bpf: Add a selftest for the tracing bpf_get_socket_cookie
Florent Revest [Wed, 10 Feb 2021 11:14:06 +0000 (12:14 +0100)]
selftests/bpf: Add a selftest for the tracing bpf_get_socket_cookie

This builds on the existing socket cookie test, which checks whether
the bpf_get_socket_cookie helpers provide the same value in
cgroup/connect6 and sockops programs for a socket created by the
userspace part of the test.

Instead of having an update_cookie sockops program tag a socket local
storage with 0xFF, this uses both an update_cookie_sockops program and
an update_cookie_tracing program which successively tag the socket with
0x0F and then 0xF0.

Signed-off-by: Florent Revest <revest@chromium.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: KP Singh <kpsingh@kernel.org>
Link: https://lore.kernel.org/bpf/20210210111406.785541-5-revest@chromium.org
3 years ago selftests/bpf: Use vmlinux.h in socket_cookie_prog.c
Florent Revest [Wed, 10 Feb 2021 11:14:05 +0000 (12:14 +0100)]
selftests/bpf: Use vmlinux.h in socket_cookie_prog.c

When migrating from bpf.h's to vmlinux.h's definition of struct
bpf_sock, an interesting LLVM behavior surfaced: LLVM started producing
two fetches of ctx->sk in the sockops program, which means that the
verifier could not keep track of the NULL check on ctx->sk. Therefore,
we need to extract ctx->sk into a variable before checking and
dereferencing it.
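
A hedged sketch of the resulting pattern in the sockops program (surrounding
logic omitted):

  /* read ctx->sk once so the verifier tracks a single NULL-checked
   * pointer instead of two independent loads
   */
  struct bpf_sock *sk = ctx->sk;

  if (!sk)
          return 1;
  /* ... dereference sk, not ctx->sk, from here on ... */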

Signed-off-by: Florent Revest <revest@chromium.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: KP Singh <kpsingh@kernel.org>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20210210111406.785541-4-revest@chromium.org
3 years ago selftests/bpf: Integrate the socket_cookie test to test_progs
Florent Revest [Wed, 10 Feb 2021 11:14:04 +0000 (12:14 +0100)]
selftests/bpf: Integrate the socket_cookie test to test_progs

Currently, the selftest for the BPF socket_cookie helpers is built and
run independently from test_progs. It's easy to forget and hard to
maintain.

This patch moves the socket cookies test into prog_tests/ and vastly
simplifies its logic by:
- rewriting the loading code with BPF skeletons
- rewriting the server/client code with network helpers
- rewriting the cgroup code with test__join_cgroup
- rewriting the error handling code with CHECKs

Signed-off-by: Florent Revest <revest@chromium.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: KP Singh <kpsingh@kernel.org>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20210210111406.785541-3-revest@chromium.org
3 years ago bpf: Expose bpf_get_socket_cookie to tracing programs
Florent Revest [Wed, 10 Feb 2021 11:14:03 +0000 (12:14 +0100)]
bpf: Expose bpf_get_socket_cookie to tracing programs

Exposing bpf_get_socket_cookie to tracing programs needs a new helper (see the
sketch below) that:
- can work in a sleepable context (using sock_gen_cookie)
- takes a struct sock pointer and checks that it's not NULL
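
A hedged example of calling the new helper from a tracing program (the attach
point and program body are illustrative, not taken from the patch):

  SEC("fentry/inet_stream_connect")
  int BPF_PROG(trace_connect, struct socket *sock)
  {
          __u64 cookie = bpf_get_socket_cookie(sock->sk);

          /* e.g. key a map or emit an event with the cookie */
          bpf_printk("socket cookie: %llu", cookie);
          return 0;
  }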

Signed-off-by: Florent Revest <revest@chromium.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: KP Singh <kpsingh@kernel.org>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20210210111406.785541-2-revest@chromium.org
3 years ago bpf: Be less specific about socket cookies guarantees
Florent Revest [Wed, 10 Feb 2021 11:14:02 +0000 (12:14 +0100)]
bpf: Be less specific about socket cookies guarantees

Since "92acdc58ab11 bpf, net: Rework cookie generator as per-cpu one"
socket cookies are not guaranteed to be non-decreasing. The
bpf_get_socket_cookie helper descriptions are currently specifying that
cookies are non-decreasing but we don't want users to rely on that.

Reported-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: Florent Revest <revest@chromium.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: KP Singh <kpsingh@kernel.org>
Link: https://lore.kernel.org/bpf/20210210111406.785541-1-revest@chromium.org
3 years ago kbuild: Do not clean resolve_btfids if the output does not exist
Jiri Olsa [Thu, 11 Feb 2021 12:40:04 +0000 (13:40 +0100)]
kbuild: Do not clean resolve_btfids if the output does not exist

Nathan reported issue with cleaning empty build directory:

  $ make -s O=build distclean
  ../../scripts/Makefile.include:4: *** \
  O=/ho...build/tools/bpf/resolve_btfids does not exist.  Stop.

The problem is that the tools scripts require an existing output
directory, otherwise they fail.

Add a check around the resolve_btfids clean target to
ensure the output directory is in place.

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Tested-by: Nathan Chancellor <nathan@kernel.org>
Link: https://lore.kernel.org/bpf/20210211124004.1144344-1-jolsa@kernel.org
3 years ago selftests/bpf: Add a test for map-in-map and per-cpu maps in sleepable progs
Alexei Starovoitov [Wed, 10 Feb 2021 03:36:34 +0000 (19:36 -0800)]
selftests/bpf: Add a test for map-in-map and per-cpu maps in sleepable progs

Add a basic test for map-in-map and per-cpu maps in sleepable programs.

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: KP Singh <kpsingh@kernel.org>
Link: https://lore.kernel.org/bpf/20210210033634.62081-10-alexei.starovoitov@gmail.com
3 years ago bpf: Allows per-cpu maps and map-in-map in sleepable programs
Alexei Starovoitov [Wed, 10 Feb 2021 03:36:33 +0000 (19:36 -0800)]
bpf: Allows per-cpu maps and map-in-map in sleepable programs

Since sleepable programs now execute under migrate_disable,
per-cpu maps are safe to use.
Map-in-map has been OK to use in sleepable programs from the time sleepable
progs were introduced.

Note that non-preallocated maps are still not safe, since there is
no rcu_read_lock yet in sleepable programs and dynamically allocated
map elements rely on RCU protection. Sleepable programs
have rcu_read_lock_trace instead. That limitation will be addressed
in the future.

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: KP Singh <kpsingh@kernel.org>
Link: https://lore.kernel.org/bpf/20210210033634.62081-9-alexei.starovoitov@gmail.com
3 years ago selftests/bpf: Improve recursion selftest
Alexei Starovoitov [Wed, 10 Feb 2021 03:36:32 +0000 (19:36 -0800)]
selftests/bpf: Improve recursion selftest

Since the recursion_misses counter is available in bpf_prog_info,
improve the selftest to make sure it's counting correctly.

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20210210033634.62081-8-alexei.starovoitov@gmail.com
3 years ago bpf: Count the number of times recursion was prevented
Alexei Starovoitov [Wed, 10 Feb 2021 03:36:31 +0000 (19:36 -0800)]
bpf: Count the number of times recursion was prevented

Add a per-program counter for the number of times the recursion prevention
mechanism was triggered and expose it via show_fdinfo and bpf_prog_info.
Teach bpftool to print it.

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20210210033634.62081-7-alexei.starovoitov@gmail.com
3 years ago selftest/bpf: Add a recursion test
Alexei Starovoitov [Wed, 10 Feb 2021 03:36:30 +0000 (19:36 -0800)]
selftest/bpf: Add a recursion test

Add recursive non-sleepable fentry program as a test.
All attach points where sleepable progs can execute are non recursive so far.
The recursion protection mechanism for sleepable cannot be activated yet.

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20210210033634.62081-6-alexei.starovoitov@gmail.com
3 years ago bpf: Add per-program recursion prevention mechanism
Alexei Starovoitov [Wed, 10 Feb 2021 03:36:29 +0000 (19:36 -0800)]
bpf: Add per-program recursion prevention mechanism

Since both sleepable and non-sleepable programs execute under migrate_disable,
add a recursion prevention mechanism to both types of programs when they're
executed via the BPF trampoline.

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20210210033634.62081-5-alexei.starovoitov@gmail.com
3 years ago bpf: Compute program stats for sleepable programs
Alexei Starovoitov [Wed, 10 Feb 2021 03:36:28 +0000 (19:36 -0800)]
bpf: Compute program stats for sleepable programs

Since sleepable programs don't migrate from the CPU, the execution stats can be
computed for them as well. Reuse the same infrastructure for both sleepable and
non-sleepable programs.

run_cnt     -> the number of times the program was executed.
run_time_ns -> the program execution time in nanoseconds including the
               off-cpu time when the program was sleeping.

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: KP Singh <kpsingh@kernel.org>
Link: https://lore.kernel.org/bpf/20210210033634.62081-4-alexei.starovoitov@gmail.com
3 years ago bpf: Run sleepable programs with migration disabled
Alexei Starovoitov [Wed, 10 Feb 2021 03:36:27 +0000 (19:36 -0800)]
bpf: Run sleepable programs with migration disabled

In older non-RT kernels migrate_disable() was the same as preempt_disable().
Since commit 74d862b682f5 ("sched: Make migrate_disable/enable() independent of RT")
migrate_disable() is real and doesn't prevent sleeping.

Running sleepable programs with migration disabled allows adding support for
program stats and per-cpu maps later.

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: KP Singh <kpsingh@kernel.org>
Link: https://lore.kernel.org/bpf/20210210033634.62081-3-alexei.starovoitov@gmail.com
3 years ago bpf: Optimize program stats
Alexei Starovoitov [Wed, 10 Feb 2021 03:36:26 +0000 (19:36 -0800)]
bpf: Optimize program stats

Move bpf_prog_stats from prog->aux into prog to avoid one extra load
in critical path of program execution.

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20210210033634.62081-2-alexei.starovoitov@gmail.com
3 years ago bpf_lru_list: Read double-checked variable once without lock
Marco Elver [Tue, 9 Feb 2021 11:27:01 +0000 (12:27 +0100)]
bpf_lru_list: Read double-checked variable once without lock

For double-checked locking in bpf_common_lru_push_free(), node->type is
read outside the critical section and then re-checked under the lock.
However, concurrent writes to node->type result in data races.

For example, the following concurrent access was observed by KCSAN:

  write to 0xffff88801521bc22 of 1 bytes by task 10038 on cpu 1:
   __bpf_lru_node_move_in        kernel/bpf/bpf_lru_list.c:91
   __local_list_flush            kernel/bpf/bpf_lru_list.c:298
   ...
  read to 0xffff88801521bc22 of 1 bytes by task 10043 on cpu 0:
   bpf_common_lru_push_free      kernel/bpf/bpf_lru_list.c:507
   bpf_lru_push_free             kernel/bpf/bpf_lru_list.c:555
   ...

Fix the data races where node->type is read outside the critical section
(for double-checked locking) by marking the access with READ_ONCE() as
well as ensuring the variable is only accessed once.
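
The generic shape of the fix, simplified and hedged (the real code keeps more
state handling around the re-check):

  u8 node_type = READ_ONCE(node->type); /* single read outside the lock */

  if (node_type == BPF_LRU_LOCAL_LIST_T_FREE ||
      node_type == BPF_LRU_LOCAL_LIST_T_PENDING) {
          raw_spin_lock_irqsave(&loc_l->lock, flags);
          /* re-check the current value under the lock before acting on it */
          if (node->type == BPF_LRU_LOCAL_LIST_T_PENDING) {
                  /* still pending: move it to the local free list */
          }
          raw_spin_unlock_irqrestore(&loc_l->lock, flags);
          return;
  }
  /* otherwise push to the global LRU free list */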

Fixes: 3a08c2fd7634 ("bpf: LRU List")
Reported-by: syzbot+3536db46dfa58c573458@syzkaller.appspotmail.com
Reported-by: syzbot+516acdb03d3e27d91bcd@syzkaller.appspotmail.com
Signed-off-by: Marco Elver <elver@google.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: Martin KaFai Lau <kafai@fb.com>
Link: https://lore.kernel.org/bpf/20210209112701.3341724-1-elver@google.com
3 years ago selftests/bpf: Simplify the calculation of variables
Jiapeng Chong [Tue, 9 Feb 2021 08:46:38 +0000 (16:46 +0800)]
selftests/bpf: Simplify the calculation of variables

Fix the following coccicheck warnings:

./tools/testing/selftests/bpf/xdpxceiver.c:954:28-30: WARNING !A || A &&
B is equivalent to !A || B.

./tools/testing/selftests/bpf/xdpxceiver.c:932:28-30: WARNING !A || A &&
B is equivalent to !A || B.

./tools/testing/selftests/bpf/xdpxceiver.c:909:28-30: WARNING !A || A &&
B is equivalent to !A || B.

Reported-by: Abaci Robot <abaci@linux.alibaba.com>
Signed-off-by: Jiapeng Chong <jiapeng.chong@linux.alibaba.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/1612860398-102839-1-git-send-email-jiapeng.chong@linux.alibaba.com
3 years ago selftests/bpf: Fix endianness issues in atomic tests
Ilya Leoshkevich [Wed, 10 Feb 2021 02:07:13 +0000 (03:07 +0100)]
selftests/bpf: Fix endianness issues in atomic tests

Atomic tests store a DW, but then load it back as a W from the same
address. This doesn't work on big-endian systems, and since the point
of those tests is not testing narrow loads, fix simply by loading a
DW.

Fixes: 98d666d05a1d ("bpf: Add tests for new BPF atomic operations")
Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20210210020713.77911-1-iii@linux.ibm.com
3 years ago Merge branch 'allow variable-offset stack access'
Alexei Starovoitov [Wed, 10 Feb 2021 18:44:19 +0000 (10:44 -0800)]
Merge branch 'allow variable-offset stack access'

Andrei Matei says:

====================

Before this patch, variable offset access to the stack was disallowed
for regular instructions, but was allowed for "indirect" accesses (i.e.
helpers). This patch removes the restriction, allowing reading and
writing to the stack through stack pointers with variable offsets. This
makes stack-allocated buffers more usable in programs, and brings stack
pointers closer to other types of pointers.

The motivation is being able to use stack-allocated buffers for data
manipulation. When the stack size limit is sufficient, allocating
buffers on the stack is simpler than per-cpu arrays, or other
alternatives.

V2 -> V3

- var-offset writes mark all the stack slots in range as initialized, so
  that future reads are not rejected.
- rewrote the C test to not use uprobes, as per Andrii's suggestion.
- addressed other review comments from Alexei.

V1 -> V2

- add support for var-offset stack writes, in addition to reads
- add a C test
- made variable offset direct reads no longer destroy spilled registers
  in the access range
- address review nits
====================

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
3 years ago selftest/bpf: Add test for var-offset stack access
Andrei Matei [Sun, 7 Feb 2021 01:10:27 +0000 (20:10 -0500)]
selftest/bpf: Add test for var-offset stack access

Add a higher-level test (C BPF program) for the new functionality -
variable-offset stack reads and writes.

Signed-off-by: Andrei Matei <andreimatei1@gmail.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20210207011027.676572-5-andreimatei1@gmail.com
3 years ago selftest/bpf: Verifier tests for var-off access
Andrei Matei [Sun, 7 Feb 2021 01:10:26 +0000 (20:10 -0500)]
selftest/bpf: Verifier tests for var-off access

Add tests for the new functionality - reading and writing to the stack
through a variable-offset pointer.

Signed-off-by: Andrei Matei <andreimatei1@gmail.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20210207011027.676572-4-andreimatei1@gmail.com
3 years ago selftest/bpf: Adjust expected verifier errors
Andrei Matei [Sun, 7 Feb 2021 01:10:25 +0000 (20:10 -0500)]
selftest/bpf: Adjust expected verifier errors

The verifier errors around stack accesses have changed slightly in the
previous commit (generally for the better).

Signed-off-by: Andrei Matei <andreimatei1@gmail.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20210207011027.676572-3-andreimatei1@gmail.com
3 years ago bpf: Allow variable-offset stack access
Andrei Matei [Sun, 7 Feb 2021 01:10:24 +0000 (20:10 -0500)]
bpf: Allow variable-offset stack access

Before this patch, variable offset access to the stack was disallowed
for regular instructions, but was allowed for "indirect" accesses (i.e.
helpers). This patch removes the restriction, allowing reading and
writing to the stack through stack pointers with variable offsets. This
makes stack-allocated buffers more usable in programs, and brings stack
pointers closer to other types of pointers.

The motivation is being able to use stack-allocated buffers for data
manipulation. When the stack size limit is sufficient, allocating
buffers on the stack is simpler than per-cpu arrays, or other
alternatives.

In unprivileged programs, variable-offset reads and writes are
disallowed (they were already disallowed for the indirect access case)
because the speculative execution checking code doesn't support them.
Additionally, when writing through a variable-offset stack pointer, if
any pointers are in the accessible range, there's a possibility of later
leaking pointers because the write cannot be tracked precisely.

Writes with variable offset mark the whole range as initialized, even
though we don't know which stack slots are actually written. This is in
order to not reject future reads to these slots. Note that this doesn't
affect writes done through helpers; like before, helpers need the whole
stack range to be initialized to begin with.
All the stack slots in the range are considered scalars after the write;
variable-offset register spills are not tracked.

For reads, all the stack slots in the variable range need to be
initialized (but see above about what writes do), otherwise the read is
rejected. All registers spilled in stack slots that might be read are
marked as having been read; however, reads through such pointers don't do
register filling; the target register will always be either a scalar or
a constant zero.
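
A hedged sketch of the kind of program this makes loadable (attach point and
bounding logic are illustrative; the buffer is zero-initialized so the
variable-offset read only touches initialized slots):

  #include <linux/bpf.h>
  #include <bpf/bpf_helpers.h>

  char _license[] SEC("license") = "GPL";

  SEC("raw_tp/sys_enter")
  int var_off_stack(void *ctx)
  {
          char buf[16] = {};
          /* runtime value, masked so the verifier can bound it to [0, 15] */
          __u32 idx = bpf_get_prandom_u32() & 0xf;

          return buf[idx]; /* variable-offset stack read */
  }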

Signed-off-by: Andrei Matei <andreimatei1@gmail.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20210207011027.676572-2-andreimatei1@gmail.com
3 years ago Merge branch 'kbuild/resolve_btfids: Invoke resolve_btfids'
Andrii Nakryiko [Tue, 9 Feb 2021 05:21:39 +0000 (21:21 -0800)]
Merge branch 'kbuild/resolve_btfids: Invoke resolve_btfids'

Jiri Olsa says:

====================

hi,
resolve_btfids tool is used during the kernel build,
so we should clean it on kernel's make clean.

v2 changes:
  - add Song's acks on patches 1 and 4 (others changed) [Song]
  - add missing / [Andrii]
  - change srctree variable initialization [Andrii]
  - shifted ifdef for clean target [Andrii]

thanks,
jirka
====================

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
3 years ago kbuild: Add resolve_btfids clean to root clean target
Jiri Olsa [Fri, 5 Feb 2021 12:40:20 +0000 (13:40 +0100)]
kbuild: Add resolve_btfids clean to root clean target

The resolve_btfids tool is used during the kernel build,
so we should clean it on the kernel's make clean.

Invoke the resolve_btfids clean as part of the root
'make clean'.

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: Song Liu <songliubraving@fb.com>
Link: https://lore.kernel.org/bpf/20210205124020.683286-5-jolsa@kernel.org
3 years ago tools/resolve_btfids: Set srctree variable unconditionally
Jiri Olsa [Fri, 5 Feb 2021 12:40:19 +0000 (13:40 +0100)]
tools/resolve_btfids: Set srctree variable unconditionally

We want this clean target to be called from the tree's root Makefile,
which defines the same srctree variable, and that will screw up
the make setup.

We actually do not use the srctree passed from outside,
so we can solve this by setting the current srctree value
directly.

Also change the way srctree is initialized, as suggested
by Andrii.

The root Makefile also does not define the implicit RM variable,
so add an RM initialization.

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20210205124020.683286-4-jolsa@kernel.org
3 years ago tools/resolve_btfids: Check objects before removing
Jiri Olsa [Fri, 5 Feb 2021 12:40:18 +0000 (13:40 +0100)]
tools/resolve_btfids: Check objects before removing

We want this clean target to be called from the tree's root clean,
and that one is silent if there's nothing to clean.

Add a check for all objects to clean and display CLEAN
messages only if there are objects to remove.

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20210205124020.683286-3-jolsa@kernel.org
3 years ago tools/resolve_btfids: Build libbpf and libsubcmd in separate directories
Jiri Olsa [Fri, 5 Feb 2021 12:40:17 +0000 (13:40 +0100)]
tools/resolve_btfids: Build libbpf and libsubcmd in separate directories

Set up separate build directories for libbpf and libsubcmd,
so they are separated from other objects and we don't get them mixed up
in the future.

It also simplifies cleaning, which is now a simple rm -rf.

There is also no need for the FEATURE-DUMP.libbpf and bpf_helper_defs.h
entries in .gitignore anymore.

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: Song Liu <songliubraving@fb.com>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20210205124020.683286-2-jolsa@kernel.org
3 years ago bpf: Simplify bool comparison
Jiapeng Chong [Mon, 8 Feb 2021 09:43:36 +0000 (17:43 +0800)]
bpf: Simplify bool comparison

Fix the following coccicheck warning:

./tools/bpf/bpf_dbg.c:893:32-36: WARNING: Comparison to bool.

Reported-by: Abaci Robot <abaci@linux.alibaba.com>
Signed-off-by: Jiapeng Chong <jiapeng.chong@linux.alibaba.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/1612777416-34339-1-git-send-email-jiapeng.chong@linux.alibaba.com
3 years ago selftests/bpf: Add missing cleanup in atomic_bounds test
Brendan Jackman [Mon, 8 Feb 2021 12:37:37 +0000 (12:37 +0000)]
selftests/bpf: Add missing cleanup in atomic_bounds test

Add missing skeleton destroy call.

Fixes: 37086bfdc737 ("bpf: Propagate stack bounds to registers in atomics w/ BPF_FETCH")
Reported-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Brendan Jackman <jackmanb@google.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20210208123737.963172-1-jackmanb@google.com
3 years ago selftests/bpf: Remove unneeded semicolon
Yang Li [Mon, 8 Feb 2021 10:30:13 +0000 (18:30 +0800)]
selftests/bpf: Remove unneeded semicolon

Eliminate the following coccicheck warning:
./tools/testing/selftests/bpf/test_flow_dissector.c:506:2-3: Unneeded
semicolon

Reported-by: Abaci Robot <abaci@linux.alibaba.com>
Signed-off-by: Yang Li <yang.lee@linux.alibaba.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/1612780213-84583-1-git-send-email-yang.lee@linux.alibaba.com
3 years ago bpf/benchs/bench_ringbufs: Remove unneeded semicolon
Yang Li [Sun, 7 Feb 2021 07:52:40 +0000 (15:52 +0800)]
bpf/benchs/bench_ringbufs: Remove unneeded semicolon

Eliminate the following coccicheck warning:
./tools/testing/selftests/bpf/benchs/bench_ringbufs.c:322:2-3: Unneeded
semicolon

Reported-by: Abaci Robot <abaci@linux.alibaba.com>
Signed-off-by: Yang Li <yang.lee@linux.alibaba.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/1612684360-115910-1-git-send-email-yang.lee@linux.alibaba.com
3 years ago bpf: Refactor BPF_PSEUDO_CALL checking as a helper function
Yonghong Song [Thu, 4 Feb 2021 23:48:27 +0000 (15:48 -0800)]
bpf: Refactor BPF_PSEUDO_CALL checking as a helper function

There is no functionality change. This refactoring is intended
to facilitate the next patch's change involving BPF_PSEUDO_FUNC.

Signed-off-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20210204234827.1628953-1-yhs@fb.com
3 years ago Merge branch 'BPF ring buffer + sleepable programs'
Andrii Nakryiko [Fri, 5 Feb 2021 00:33:56 +0000 (16:33 -0800)]
Merge branch 'BPF ring buffer + sleepable programs'

KP Singh says:

====================

- Use ring_buffer__consume without BPF_RB_FORCE_WAKEUP as suggested by
  Andrii
- Use ASSERT_OK_PTR macro

Sleepable programs currently do not have access to any ringbuffer and
since the perf ring buffer is a per-cpu map, it would not be trivial to
enable for sleepable programs. Our specific use-case is to use the
bpf_ima_inode_hash helper and write the hash to a ring buffer from a
sleepable LSM hook.

This series allows the BPF ringbuffer to be used in sleepable programs
(tracing and lsm). Since the helper prototypes were already exposed,
the only change required was to have the verifier allow
BPF_MAP_TYPE_RINGBUF for sleepable programs. The ima test is also
modified to use the ringbuffer instead of global variables.

Based on discussions we had over the BPF office hours and enabling all
the possible debug options, I could not find any issues or warnings when
using the ring buffer from sleepable programs.
====================

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
3 years ago bpf/selftests: Update the IMA test to use BPF ring buffer
KP Singh [Thu, 4 Feb 2021 19:36:22 +0000 (19:36 +0000)]
bpf/selftests: Update the IMA test to use BPF ring buffer

Instead of using shared global variables between userspace and BPF, use
the BPF ring buffer to send the IMA hash to userspace. This helps
in validating both IMA and the usage of the ring buffer in sleepable
programs.

Signed-off-by: KP Singh <kpsingh@kernel.org>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20210204193622.3367275-3-kpsingh@kernel.org
3 years ago bpf: Allow usage of BPF ringbuffer in sleepable programs
KP Singh [Thu, 4 Feb 2021 19:36:21 +0000 (19:36 +0000)]
bpf: Allow usage of BPF ringbuffer in sleepable programs

The BPF ringbuffer map is pre-allocated and the implementation logic
does not rely on disabling preemption or per-cpu data structures. Using
the BPF ring buffer from sleepable LSM and tracing programs does not trigger
any warnings with DEBUG_ATOMIC_SLEEP, DEBUG_PREEMPT,
PROVE_RCU, PROVE_LOCKING and LOCKDEP enabled.

This allows helpers like bpf_copy_from_user and bpf_ima_inode_hash to
write to the BPF ring buffer from sleepable BPF programs.
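
A hedged sketch of the now-permitted combination, modelled loosely on the IMA
selftest (hook choice and payload are illustrative):

  #include "vmlinux.h"
  #include <bpf/bpf_helpers.h>
  #include <bpf/bpf_tracing.h>

  char _license[] SEC("license") = "GPL";

  struct {
          __uint(type, BPF_MAP_TYPE_RINGBUF);
          __uint(max_entries, 4096);
  } ringbuf SEC(".maps");

  SEC("lsm.s/bprm_committed_creds") /* sleepable LSM hook */
  int BPF_PROG(measure_exec, struct linux_binprm *bprm)
  {
          u64 *e = bpf_ringbuf_reserve(&ringbuf, sizeof(*e), 0);

          if (!e)
                  return 0;
          /* e.g. store the bpf_ima_inode_hash() result here */
          *e = 0;
          bpf_ringbuf_submit(e, 0);
          return 0;
  }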

Signed-off-by: KP Singh <kpsingh@kernel.org>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20210204193622.3367275-2-kpsingh@kernel.org
3 years ago Merge branch 'BPF selftest helper script'
Andrii Nakryiko [Fri, 5 Feb 2021 00:03:17 +0000 (16:03 -0800)]
Merge branch 'BPF selftest helper script'

KP Singh says:

====================

# v4 -> v5

- Use %Y (modification time) instead of %W (creation time) of the local
  copy of the kernel config to check for newer upstream config.
- Rename the script to vmtest.sh

# v3 -> v4

- Fix logic for updating kernel config to not download the file
  if there are no upstream modifications and avoid extraneous
  kernel compilation as suggested by Andrii.
- This also removes the need for the -k flag.

# v2 -> v3

- Fixes to silence verbose commands
- Fixed output buffering without being teed out
- Fixed the clobbered error code of the script
- Other fixes suggested by Andrii

# v1 -> v2

- The script now compiles the kernel by default, and the -k option
  implies "keep the kernel"
- Pointer to the script in the docs.
- Some minor simplifications.

Allow developers and contributors to understand if their changes would
end up breaking the BPF CI and avoid the back and forth required for
fixing the test cases in the CI environment. The se
====================

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
3 years ago bpf/selftests: Add a short note about vmtest.sh in README.rst
KP Singh [Thu, 4 Feb 2021 19:45:44 +0000 (19:45 +0000)]
bpf/selftests: Add a short note about vmtest.sh in README.rst

Add a short note to make contributors aware of the existence of the
script. The documentation does not intentionally document all the
options of the script to avoid mentioning it in two places (it's
available in the usage / help message of the script).

Signed-off-by: KP Singh <kpsingh@kernel.org>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20210204194544.3383814-3-kpsingh@kernel.org
3 years ago bpf: Helper script for running BPF presubmit tests
KP Singh [Thu, 4 Feb 2021 19:45:43 +0000 (19:45 +0000)]
bpf: Helper script for running BPF presubmit tests

The script runs the BPF selftests locally on the same kernel image
as they would run post submit in the BPF continuous integration
framework.

The goal of the script is to allow contributors to run selftests locally
in the same environment to check if their changes would end up breaking
the BPF CI and reduce the back-and-forth between the maintainers and the
developers.

Signed-off-by: KP Singh <kpsingh@kernel.org>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Tested-by: Jiri Olsa <jolsa@redhat.com>
Link: https://lore.kernel.org/bpf/20210204194544.3383814-2-kpsingh@kernel.org
3 years ago bpf: Emit explicit NULL pointer checks for PROBE_LDX instructions.
Alexei Starovoitov [Tue, 2 Feb 2021 05:38:37 +0000 (21:38 -0800)]
bpf: Emit explicit NULL pointer checks for PROBE_LDX instructions.

PTR_TO_BTF_ID registers contain either a kernel pointer or NULL.
Emit the NULL check explicitly in the JIT instead of going into
do_user_addr_fault() on a NULL dereference.

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Song Liu <songliubraving@fb.com>
Link: https://lore.kernel.org/bpf/20210202053837.95909-1-alexei.starovoitov@gmail.com
3 years ago libbpf: Stop using feature-detection Makefiles
Andrii Nakryiko [Wed, 3 Feb 2021 20:34:45 +0000 (12:34 -0800)]
libbpf: Stop using feature-detection Makefiles

Libbpf's Makefile relies on Linux tools infrastructure's feature detection
framework, but libbpf's needs are very modest: it detects the presence of
libelf and libz, both of which are mandatory. So it doesn't benefit much from
the framework, but pays significant costs in terms of maintainability and
debugging experience when something goes wrong. The other feature detector,
testing for the presence of a minimal BPF API in system headers, is long
obsolete as well, providing no value.

So stop using feature detection and just assume the presence of libelf and
libz during build time. Worst case, user will get a clear and actionable
linker error, e.g.:

  /usr/bin/ld: cannot find -lelf

On the other hand, we completely bypass recurring issues various users
reported over time with false negatives of feature detection (libelf or libz
not being detected, while they are actually present in the system).

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Acked-by: Randy Dunlap <rdunlap@infradead.org>
Link: https://lore.kernel.org/bpf/20210203203445.3356114-1-andrii@kernel.org
3 years ago net, veth: Alloc skb in bulk for ndo_xdp_xmit
Lorenzo Bianconi [Fri, 29 Jan 2021 22:04:08 +0000 (23:04 +0100)]
net, veth: Alloc skb in bulk for ndo_xdp_xmit

Split ndo_xdp_xmit and ndo_start_xmit use cases in veth_xdp_rcv routine
in order to alloc skbs in bulk for XDP_PASS verdict.

Introduce xdp_alloc_skb_bulk utility routine to alloc skb bulk list.
The proposed approach has been tested in the following scenario:

eth (ixgbe) --> XDP_REDIRECT --> veth0 --> (remote-ns) veth1 --> XDP_PASS

XDP_REDIRECT: xdp_redirect_map bpf sample
XDP_PASS: xdp_rxq_info bpf sample

traffic generator: pkt_gen sending udp traffic on a remote device

bpf-next master: ~3.64Mpps
bpf-next + skb bulking allocation: ~3.79Mpps

Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Reviewed-by: Toshiaki Makita <toshiaki.makita1@gmail.com>
Acked-by: Jesper Dangaard Brouer <brouer@redhat.com>
Link: https://lore.kernel.org/bpf/a14a30d3c06fff24e13f836c733d80efc0bd6eb5.1611957532.git.lorenzo@kernel.org
3 years ago selftest/bpf: Testing for multiple logs on REJECT
Andrei Matei [Sat, 30 Jan 2021 22:01:50 +0000 (17:01 -0500)]
selftest/bpf: Testing for multiple logs on REJECT

This patch adds support to verifier tests to check for a succession of
verifier log messages on program load failure. This makes the errstr
field work uniformly across REJECT and VERBOSE_ACCEPT checks.

This patch also increases the maximum size of a message in the series of
messages to test from 80 chars to 200 chars. This is in order to keep
existing tests working, which sometimes test for messages larger than 80
chars (which was accepted in the REJECT case, when testing for a single
message, but not in the VERBOSE_ACCEPT case, when testing for possibly
multiple messages).

An example of such a long, checked message is in bounds.c: "R1 has
unknown scalar with mixed signed bounds, pointer arithmetic with it
prohibited for !root"

Signed-off-by: Andrei Matei <andreimatei1@gmail.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Song Liu <songliubraving@fb.com>
Link: https://lore.kernel.org/bpf/20210130220150.59305-1-andreimatei1@gmail.com
3 years ago samples: bpf: Remove unneeded semicolon
Yang Li [Wed, 3 Feb 2021 03:17:28 +0000 (11:17 +0800)]
samples: bpf: Remove unneeded semicolon

Eliminate the following coccicheck warning:
./samples/bpf/cookie_uid_helper_example.c:316:3-4: Unneeded semicolon

Reported-by: Abaci Robot <abaci@linux.alibaba.com>
Signed-off-by: Yang Li <yang.lee@linux.alibaba.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/1612322248-35398-1-git-send-email-yang.lee@linux.alibaba.com
3 years ago selftests/bpf: Fix a compiler warning in local_storage test
KP Singh [Tue, 2 Feb 2021 21:37:30 +0000 (21:37 +0000)]
selftests/bpf: Fix a compiler warning in local_storage test

Some compilers trigger a warning when tmp_dir_path is allocated
with a fixed size of 64-bytes and used in the following snprintf:

  snprintf(tmp_exec_path, sizeof(tmp_exec_path), "%s/copy_of_rm",
   tmp_dir_path);

  warning: ‘/copy_of_rm’ directive output may be truncated writing 11
  bytes into a region of size between 1 and 64 [-Wformat-truncation=]

This is because it assumes that tmp_dir_path can be a maximum of 64
bytes long and, therefore, the end result can get truncated. Fix it by
not using a fixed size in the initialization of tmp_dir_path, which
allows the compiler to track the actual size of the array better.
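
The shape of the fix (the path literal is illustrative):

  /* before: the declared size hides how long the string really is */
  char tmp_dir_path[64] = "/tmp/dir_XXXXXX";

  /* after: let the initializer size the array, so the compiler knows the
   * exact length when checking the later snprintf()
   */
  char tmp_dir_path[] = "/tmp/dir_XXXXXX";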

Fixes: 2f94ac191846 ("bpf: Update local storage test to check handling of null ptrs")
Signed-off-by: KP Singh <kpsingh@kernel.org>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20210202213730.1906931-1-kpsingh@kernel.org
3 years ago bpf: Propagate stack bounds to registers in atomics w/ BPF_FETCH
Brendan Jackman [Tue, 2 Feb 2021 13:50:02 +0000 (13:50 +0000)]
bpf: Propagate stack bounds to registers in atomics w/ BPF_FETCH

When BPF_FETCH is set, atomic instructions load a value from memory
into a register. The current verifier code first checks via
check_mem_access whether we can access the memory, and then checks
via check_reg_arg whether we can write into the register.

For loads, check_reg_arg has the side effect of marking the
register's value as unknown, and check_mem_access has the side effect
of propagating bounds from memory to the register. This currently only
takes effect for stack memory.

Therefore with the current order, bounds information is thrown away,
but by simply reversing the order of check_reg_arg
vs. check_mem_access, we can instead propagate bounds smartly.

A simple test is added with an infinite loop that can only be proved
unreachable if this propagation is present. This is implemented both
with C and directly in test_verifier using assembly.

Suggested-by: John Fastabend <john.fastabend@gmail.com>
Signed-off-by: Brendan Jackman <jackmanb@google.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20210202135002.4024825-1-jackmanb@google.com
3 years ago samples/bpf: Add include dir for MIPS Loongson64 to fix build errors
Tiezhu Yang [Tue, 26 Jan 2021 14:05:25 +0000 (22:05 +0800)]
samples/bpf: Add include dir for MIPS Loongson64 to fix build errors

There are many build errors when running make M=samples/bpf on the Loongson
platform. This issue is MIPS related; x86 compiles just fine.

Here are some errors:

  CLANG-bpf  samples/bpf/sockex2_kern.o
In file included from samples/bpf/sockex2_kern.c:2:
In file included from ./include/uapi/linux/in.h:24:
In file included from ./include/linux/socket.h:8:
In file included from ./include/linux/uio.h:8:
In file included from ./include/linux/kernel.h:11:
In file included from ./include/linux/bitops.h:32:
In file included from ./arch/mips/include/asm/bitops.h:19:
In file included from ./arch/mips/include/asm/barrier.h:11:
./arch/mips/include/asm/addrspace.h:13:10: fatal error: 'spaces.h' file not found
         ^~~~~~~~~~
1 error generated.

  CLANG-bpf  samples/bpf/sockex2_kern.o
In file included from samples/bpf/sockex2_kern.c:2:
In file included from ./include/uapi/linux/in.h:24:
In file included from ./include/linux/socket.h:8:
In file included from ./include/linux/uio.h:8:
In file included from ./include/linux/kernel.h:11:
In file included from ./include/linux/bitops.h:32:
In file included from ./arch/mips/include/asm/bitops.h:22:
In file included from ./arch/mips/include/asm/cpu-features.h:13:
In file included from ./arch/mips/include/asm/cpu-info.h:15:
In file included from ./include/linux/cache.h:6:
./arch/mips/include/asm/cache.h:12:10: fatal error: 'kmalloc.h' file not found
         ^~~~~~~~~~~
1 error generated.

  CLANG-bpf  samples/bpf/sockex2_kern.o
In file included from samples/bpf/sockex2_kern.c:2:
In file included from ./include/uapi/linux/in.h:24:
In file included from ./include/linux/socket.h:8:
In file included from ./include/linux/uio.h:8:
In file included from ./include/linux/kernel.h:11:
In file included from ./include/linux/bitops.h:32:
In file included from ./arch/mips/include/asm/bitops.h:22:
./arch/mips/include/asm/cpu-features.h:15:10: fatal error: 'cpu-feature-overrides.h' file not found
         ^~~~~~~~~~~~~~~~~~~~~~~~~
1 error generated.

$ find arch/mips/include/asm -name spaces.h | sort
arch/mips/include/asm/mach-ar7/spaces.h
...
arch/mips/include/asm/mach-generic/spaces.h
...
arch/mips/include/asm/mach-loongson64/spaces.h
...
arch/mips/include/asm/mach-tx49xx/spaces.h

$ find arch/mips/include/asm -name kmalloc.h | sort
arch/mips/include/asm/mach-generic/kmalloc.h
arch/mips/include/asm/mach-ip32/kmalloc.h
arch/mips/include/asm/mach-tx49xx/kmalloc.h

$ find arch/mips/include/asm -name cpu-feature-overrides.h | sort
arch/mips/include/asm/mach-ath25/cpu-feature-overrides.h
...
arch/mips/include/asm/mach-generic/cpu-feature-overrides.h
...
arch/mips/include/asm/mach-loongson64/cpu-feature-overrides.h
...
arch/mips/include/asm/mach-tx49xx/cpu-feature-overrides.h

In the arch/mips/Makefile, there exists the following board-dependent
options:

include arch/mips/Kbuild.platforms
cflags-y += -I$(srctree)/arch/mips/include/asm/mach-generic

So we can do similar things in samples/bpf/Makefile: just add the
platform-specific and generic include dirs for MIPS Loongson64 to
fix the build errors.

Signed-off-by: Tiezhu Yang <yangtiezhu@loongson.cn>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/1611669925-25315-1-git-send-email-yangtiezhu@loongson.cn
3 years ago bpf: Simplify cases in bpf_base_func_proto
Tobias Klauser [Wed, 27 Jan 2021 17:46:15 +0000 (18:46 +0100)]
bpf: Simplify cases in bpf_base_func_proto

!perfmon_capable() is checked before the last switch(func_id) in
bpf_base_func_proto. Thus, the cases BPF_FUNC_trace_printk and
BPF_FUNC_snprintf_btf can be moved to that last switch(func_id) to omit
the inline !perfmon_capable() checks.

Signed-off-by: Tobias Klauser <tklauser@distanz.ch>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20210127174615.3038-1-tklauser@distanz.ch
3 years ago bpf: Enable bpf_{g,s}etsockopt in BPF_CGROUP_UDP{4,6}_RECVMSG
Stanislav Fomichev [Wed, 27 Jan 2021 23:28:53 +0000 (15:28 -0800)]
bpf: Enable bpf_{g,s}etsockopt in BPF_CGROUP_UDP{4,6}_RECVMSG

Those hooks run as BPF_CGROUP_RUN_SA_PROG_LOCK and operate on a locked socket.

Note that we could remove the switch for prog->expected_attach_type altogether
since all current sock_addr attach types are covered. However, it makes sense
to keep it as a safeguard in case new sock_addr attach types are added that
might not operate on a locked socket. Therefore, avoid letting this slip through.
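
Sketched against sock_addr_func_proto() in net/core/filter.c, the change boils
down to listing the new attach types in the helper switch; bpf_getsockopt is
handled the same way (simplified excerpt, not the literal diff):

  case BPF_FUNC_setsockopt:
          switch (prog->expected_attach_type) {
          /* ... other sock_addr attach types on locked sockets ... */
          case BPF_CGROUP_UDP4_RECVMSG:
          case BPF_CGROUP_UDP6_RECVMSG:
                  return &bpf_sock_addr_setsockopt_proto;
          default:
                  return NULL;
          }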

Signed-off-by: Stanislav Fomichev <sdf@google.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20210127232853.3753823-5-sdf@google.com
3 years agoselftests/bpf: Rewrite recvmsg{4,6} asm progs to c in test_sock_addr
Stanislav Fomichev [Wed, 27 Jan 2021 23:28:52 +0000 (15:28 -0800)]
selftests/bpf: Rewrite recvmsg{4,6} asm progs to c in test_sock_addr

I'll extend them in the next patch. It's easier to work with C
than with asm.

Signed-off-by: Stanislav Fomichev <sdf@google.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20210127232853.3753823-4-sdf@google.com
3 years agobpf: Enable bpf_{g,s}etsockopt in BPF_CGROUP_INET{4,6}_GET{PEER,SOCK}NAME
Stanislav Fomichev [Wed, 27 Jan 2021 23:28:51 +0000 (15:28 -0800)]
bpf: Enable bpf_{g,s}etsockopt in BPF_CGROUP_INET{4,6}_GET{PEER,SOCK}NAME

Those hooks run as BPF_CGROUP_RUN_SA_PROG_LOCK and operate on
a locked socket.

Signed-off-by: Stanislav Fomichev <sdf@google.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20210127232853.3753823-3-sdf@google.com
3 years agobpf: Enable bpf_{g,s}etsockopt in BPF_CGROUP_UDP{4,6}_SENDMSG
Stanislav Fomichev [Wed, 27 Jan 2021 23:28:50 +0000 (15:28 -0800)]
bpf: Enable bpf_{g,s}etsockopt in BPF_CGROUP_UDP{4,6}_SENDMSG

Can be used to query/modify socket state for unconnected UDP sendmsg.
Those hooks run as BPF_CGROUP_RUN_SA_PROG_LOCK and operate on
a locked socket.
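
For illustration, a minimal cgroup/sendmsg4 program using the now-enabled
helper; the program name and option constants below are illustrative, not
taken from the patch:

  #include <linux/bpf.h>
  #include <bpf/bpf_helpers.h>

  #define SOL_IP  0
  #define IP_TOS  1

  SEC("cgroup/sendmsg4")
  int tweak_sock_on_sendmsg(struct bpf_sock_addr *ctx)
  {
          int tos = 4;

          /* the socket is locked in this hook, so bpf_setsockopt() is safe */
          bpf_setsockopt(ctx, SOL_IP, IP_TOS, &tos, sizeof(tos));
          return 1;       /* allow the sendmsg */
  }

  char _license[] SEC("license") = "GPL";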

Signed-off-by: Stanislav Fomichev <sdf@google.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20210127232853.3753823-2-sdf@google.com
3 years agotools: Factor Clang, LLC and LLVM utils definitions
Sedat Dilek [Thu, 28 Jan 2021 01:50:58 +0000 (02:50 +0100)]
tools: Factor Clang, LLC and LLVM utils definitions

When dealing with BPF/BTF/pahole and DWARF v5 I wanted to build bpftool.

While looking into the source code I found duplicate assignments in misc tools
for the LLVM ecosystem, e.g. clang and llvm-objcopy.

Move the Clang, LLC and/or LLVM utils definitions to tools/scripts/Makefile.include
file and add missing includes where needed. Honestly, I was inspired by the commit
c8a950d0d3b9 ("tools: Factor HOSTCC, HOSTLD, HOSTAR definitions").

I tested with bpftool and perf on Debian/testing AMD64 and LLVM/Clang v11.1.0-rc1.

Build instructions:

[ make and make-options ]
MAKE="make V=1"
MAKE_OPTS="HOSTCC=clang HOSTCXX=clang++ HOSTLD=ld.lld CC=clang LD=ld.lld LLVM=1 LLVM_IAS=1"
MAKE_OPTS="$MAKE_OPTS PAHOLE=/opt/pahole/bin/pahole"

[ clean-up ]
$MAKE $MAKE_OPTS -C tools/ clean

[ bpftool ]
$MAKE $MAKE_OPTS -C tools/bpf/bpftool/

[ perf ]
PYTHON=python3 $MAKE $MAKE_OPTS -C tools/perf/

I was careful to respect the user's wish to override custom compiler, linker,
GNU/binutils and/or LLVM utils settings.

Signed-off-by: Sedat Dilek <sedat.dilek@gmail.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: Jiri Olsa <jolsa@redhat.com> # tools/build and tools/perf
Link: https://lore.kernel.org/bpf/20210128015117.20515-1-sedat.dilek@gmail.com
3 years agoselftests/bpf: Verify that rebinding to port < 1024 from BPF works
Stanislav Fomichev [Wed, 27 Jan 2021 19:31:40 +0000 (11:31 -0800)]
selftests/bpf: Verify that rebinding to port < 1024 from BPF works

Return 3 to indicate that the permission check for port 111
should be skipped.
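
A minimal sketch of the idea (the real selftest program does more work):

  #include <linux/bpf.h>
  #include <bpf/bpf_helpers.h>
  #include <bpf/bpf_endian.h>

  SEC("cgroup/bind4")
  int bind_v4_prog(struct bpf_sock_addr *ctx)
  {
          /* bit 0: allow the bind, bit 1: skip the CAP_NET_BIND_SERVICE check */
          if (ctx->user_port == bpf_htons(111))
                  return 3;
          return 1;
  }

  char _license[] SEC("license") = "GPL";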

Signed-off-by: Stanislav Fomichev <sdf@google.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Martin KaFai Lau <kafai@fb.com>
Link: https://lore.kernel.org/bpf/20210127193140.3170382-2-sdf@google.com
3 years agobpf: Allow rewriting to ports under ip_unprivileged_port_start
Stanislav Fomichev [Wed, 27 Jan 2021 19:31:39 +0000 (11:31 -0800)]
bpf: Allow rewriting to ports under ip_unprivileged_port_start

At the moment, BPF_CGROUP_INET{4,6}_BIND hooks can rewrite user_port
to a privileged one (< ip_unprivileged_port_start), but the bind will
be rejected later on in __inet_bind or __inet6_bind.

Let's add another return value to indicate that CAP_NET_BIND_SERVICE
check should be ignored. Use the same idea as we currently use
in cgroup/egress where bit #1 indicates CN. Instead, for
cgroup/bind{4,6}, bit #1 indicates that CAP_NET_BIND_SERVICE should
be bypassed.

v5:
- rename flags to be less confusing (Andrey Ignatov)
- rework BPF_PROG_CGROUP_INET_EGRESS_RUN_ARRAY to work on flags
  and accept BPF_RET_SET_CN (no behavioral changes)

v4:
- Add missing IPv6 support (Martin KaFai Lau)

v3:
- Update description (Martin KaFai Lau)
- Fix capability restore in selftest (Martin KaFai Lau)

v2:
- Switch to explicit return code (Martin KaFai Lau)

Signed-off-by: Stanislav Fomichev <sdf@google.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Reviewed-by: Martin KaFai Lau <kafai@fb.com>
Acked-by: Andrey Ignatov <rdna@fb.com>
Link: https://lore.kernel.org/bpf/20210127193140.3170382-1-sdf@google.com
3 years agoskmsg: Make sk_psock_destroy() static
Cong Wang [Wed, 27 Jan 2021 22:15:01 +0000 (14:15 -0800)]
skmsg: Make sk_psock_destroy() static

sk_psock_destroy() is an RCU callback; I can't see any reason why
it would be used elsewhere.

Signed-off-by: Cong Wang <cong.wang@bytedance.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Cc: John Fastabend <john.fastabend@gmail.com>
Cc: Jakub Sitnicki <jakub@cloudflare.com>
Cc: Lorenz Bauer <lmb@cloudflare.com>
Link: https://lore.kernel.org/bpf/20210127221501.46866-1-xiyou.wangcong@gmail.com
3 years agobpf: Change 'BPF_ADD' to 'BPF_AND' in print_bpf_insn()
Menglong Dong [Wed, 27 Jan 2021 02:25:07 +0000 (18:25 -0800)]
bpf: Change 'BPF_ADD' to 'BPF_AND' in print_bpf_insn()

This 'BPF_ADD' is duplicated, and I believe it should be 'BPF_AND'.

Fixes: 981f94c3e921 ("bpf: Add bitwise atomic instructions")
Signed-off-by: Menglong Dong <dong.menglong@zte.com.cn>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Brendan Jackman <jackmanb@google.com>
Link: https://lore.kernel.org/bpf/20210127022507.23674-1-dong.menglong@zte.com.cn
3 years agoselftests/bpf: Don't exit on failed bpf_testmod unload
Andrii Nakryiko [Tue, 26 Jan 2021 06:50:18 +0000 (22:50 -0800)]
selftests/bpf: Don't exit on failed bpf_testmod unload

Fix a bug in handling bpf_testmod unloading that causes test_progs to exit
prematurely if bpf_testmod unloading fails. This is especially problematic
when running a subset of test_progs that doesn't require root permissions and
doesn't rely on bpf_testmod, yet fails immediately due to the exit(1) in
unload_bpf_testmod().
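
The gist of the fix, as a sketch (not the exact selftest code):

  static void unload_bpf_testmod(void)
  {
          if (syscall(__NR_delete_module, "bpf_testmod", 0)) {
                  fprintf(stderr, "Failed to unload bpf_testmod\n");
                  /* warn and return instead of exit(1), so tests that do not
                   * need bpf_testmod can still run */
                  return;
          }
          fprintf(stdout, "Successfully unloaded bpf_testmod.\n");
  }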

Fixes: 9f7fa225894c ("selftests/bpf: Add bpf_testmod kernel module for testing")
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20210126065019.1268027-1-andrii@kernel.org
3 years agosamples/bpf: Set flag __SANE_USERSPACE_TYPES__ for MIPS to fix build warnings
Tiezhu Yang [Mon, 25 Jan 2021 05:05:46 +0000 (13:05 +0800)]
samples/bpf: Set flag __SANE_USERSPACE_TYPES__ for MIPS to fix build warnings

There exist many build warnings when running make M=samples/bpf on the
Loongson platform. This issue is MIPS related; x86 compiles just fine.

Here are some warnings:

  CC  samples/bpf/ibumad_user.o
samples/bpf/ibumad_user.c: In function ‘dump_counts’:
samples/bpf/ibumad_user.c:46:24: warning: format ‘%llu’ expects argument of type ‘long long unsigned int’, but argument 3 has type ‘__u64’ {aka ‘long unsigned int’} [-Wformat=]
    printf("0x%02x : %llu\n", key, value);
                     ~~~^          ~~~~~
                     %lu
  CC  samples/bpf/offwaketime_user.o
samples/bpf/offwaketime_user.c: In function ‘print_ksym’:
samples/bpf/offwaketime_user.c:34:17: warning: format ‘%llx’ expects argument of type ‘long long unsigned int’, but argument 3 has type ‘__u64’ {aka ‘long unsigned int’} [-Wformat=]
   printf("%s/%llx;", sym->name, addr);
              ~~~^               ~~~~
              %lx
samples/bpf/offwaketime_user.c: In function ‘print_stack’:
samples/bpf/offwaketime_user.c:68:17: warning: format ‘%lld’ expects argument of type ‘long long int’, but argument 3 has type ‘__u64’ {aka ‘long unsigned int’} [-Wformat=]
  printf(";%s %lld\n", key->waker, count);
              ~~~^                 ~~~~~
              %ld

MIPS needs __SANE_USERSPACE_TYPES__ before <linux/types.h> to select
'int-ll64.h' in arch/mips/include/uapi/asm/types.h, then it can avoid
build warnings when printing __u64 with %llu, %llx or %lld.

The header tools/include/linux/types.h defines __SANE_USERSPACE_TYPES__,
so it seems we could simply include <linux/types.h> in the source files that
have build warnings. But that has no effect, because what actually gets
included is usr/include/linux/types.h rather than tools/include/linux/types.h:
in samples/bpf/Makefile, "usr/include" is preferred over "tools/include",
and putting -Itools/include before -Iusr/include would be an ugly hack.

So defining __SANE_USERSPACE_TYPES__ for MIPS in samples/bpf/Makefile is the
proper fix. However, just adding "TPROGS_CFLAGS += -D__SANE_USERSPACE_TYPES__"
to samples/bpf/Makefile produces the following error:

Auto-detecting system features:
...                        libelf: [ on  ]
...                          zlib: [ on  ]
...                           bpf: [ OFF ]

BPF API too old
make[3]: *** [Makefile:293: bpfdep] Error 1
make[2]: *** [Makefile:156: all] Error 2

With an #ifndef __SANE_USERSPACE_TYPES__ guard in tools/include/linux/types.h,
the above error goes away, and this ifndef change does not hurt other
compilations.
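
The guard described above amounts to something like this in
tools/include/linux/types.h (sketch), with samples/bpf/Makefile additionally
passing -D__SANE_USERSPACE_TYPES__ for MIPS:

  #ifndef __SANE_USERSPACE_TYPES__
  #define __SANE_USERSPACE_TYPES__   /* on MIPS this selects int-ll64.h */
  #endif

  #include <asm/types.h>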

Signed-off-by: Tiezhu Yang <yangtiezhu@loongson.cn>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Yonghong Song <yhs@fb.com>
Link: https://lore.kernel.org/bpf/1611551146-14052-1-git-send-email-yangtiezhu@loongson.cn
3 years agotools, headers: Sync struct bpf_perf_event_data
Florian Lehner [Sat, 23 Jan 2021 18:52:21 +0000 (19:52 +0100)]
tools, headers: Sync struct bpf_perf_event_data

Update struct bpf_perf_event_data with the addr field to match the
tools headers with the kernel headers.
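
The synced definition (as in include/uapi/linux/bpf_perf_event.h):

  struct bpf_perf_event_data {
          bpf_user_pt_regs_t regs;
          __u64 sample_period;
          __u64 addr;             /* the field added to the tools copy */
  };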

Signed-off-by: Florian Lehner <dev@der-flo.net>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20210123185221.23946-1-dev@der-flo.net
3 years agoselftests/bpf: Avoid useless void *-casts
Björn Töpel [Fri, 22 Jan 2021 15:47:25 +0000 (16:47 +0100)]
selftests/bpf: Avoid useless void *-casts

There is no need to cast to void * when the argument is already void *. Avoid
cluttering the code.

Signed-off-by: Björn Töpel <bjorn.topel@intel.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20210122154725.22140-13-bjorn.topel@gmail.com
3 years agoselftests/bpf: Consistent malloc/calloc usage
Björn Töpel [Fri, 22 Jan 2021 15:47:24 +0000 (16:47 +0100)]
selftests/bpf: Consistent malloc/calloc usage

Use calloc instead of malloc where it makes sense, and avoid C++-style
void *-cast.
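
An illustrative example of the pattern (not the exact selftest code):

  #include <stdlib.h>

  struct frame { void *data; size_t len; };     /* illustrative type */

  static struct frame *alloc_frames(size_t n)
  {
          /* was: (struct frame *)malloc(n * sizeof(struct frame));
           * calloc() zero-initializes and needs no void * cast in C */
          return calloc(n, sizeof(struct frame));
  }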

Signed-off-by: Björn Töpel <bjorn.topel@intel.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20210122154725.22140-12-bjorn.topel@gmail.com
3 years agoselftests/bpf: Avoid heap allocation
Björn Töpel [Fri, 22 Jan 2021 15:47:23 +0000 (16:47 +0100)]
selftests/bpf: Avoid heap allocation

The data variable is only used locally. Instead of using the heap,
stick to using the stack.

Signed-off-by: Björn Töpel <bjorn.topel@intel.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20210122154725.22140-11-bjorn.topel@gmail.com
3 years agoselftests/bpf: Define local variables at the beginning of a block
Björn Töpel [Fri, 22 Jan 2021 15:47:22 +0000 (16:47 +0100)]
selftests/bpf: Define local variables at the beginning of a block

Use C89 rules for variable definition.

Signed-off-by: Björn Töpel <bjorn.topel@intel.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20210122154725.22140-10-bjorn.topel@gmail.com
3 years agoselftests/bpf: Change type from void * to struct generic_data *
Björn Töpel [Fri, 22 Jan 2021 15:47:21 +0000 (16:47 +0100)]
selftests/bpf: Change type from void * to struct generic_data *

Instead of casting from void *, let us use the actual type in
gen_udp_hdr().

Signed-off-by: Björn Töpel <bjorn.topel@intel.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20210122154725.22140-9-bjorn.topel@gmail.com
3 years agoselftests/bpf: Change type from void * to struct ifaceconfigobj *
Björn Töpel [Fri, 22 Jan 2021 15:47:20 +0000 (16:47 +0100)]
selftests/bpf: Change type from void * to struct ifaceconfigobj *

Instead of casting from void *, let us use the actual type in
init_iface_config().

Signed-off-by: Björn Töpel <bjorn.topel@intel.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20210122154725.22140-8-bjorn.topel@gmail.com
3 years agoselftests/bpf: Remove casting by introducing a local variable
Björn Töpel [Fri, 22 Jan 2021 15:47:19 +0000 (16:47 +0100)]
selftests/bpf: Remove casting by introducing a local variable

Let us use a local variable in nsswitchthread(), so we can remove a
lot of casting for better readability.

Signed-off-by: Björn Töpel <bjorn.topel@intel.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20210122154725.22140-7-bjorn.topel@gmail.com
3 years agoselftests/bpf: Improve readability of xdpxceiver/worker_pkt_validate()
Björn Töpel [Fri, 22 Jan 2021 15:47:18 +0000 (16:47 +0100)]
selftests/bpf: Improve readability of xdpxceiver/worker_pkt_validate()

Introduce a local variable to get rid of a lot of casting. Move the common
code outside of the if/else-clause.

Signed-off-by: Björn Töpel <bjorn.topel@intel.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20210122154725.22140-6-bjorn.topel@gmail.com
3 years agoselftests/bpf: Remove memory leak
Björn Töpel [Fri, 22 Jan 2021 15:47:17 +0000 (16:47 +0100)]
selftests/bpf: Remove memory leak

The allocated entry is immediately overwritten by an assignment. Fix
that.

Signed-off-by: Björn Töpel <bjorn.topel@intel.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20210122154725.22140-5-bjorn.topel@gmail.com
3 years agoselftests/bpf: Fix style warnings
Björn Töpel [Fri, 22 Jan 2021 15:47:16 +0000 (16:47 +0100)]
selftests/bpf: Fix style warnings

Silence three checkpatch style warnings.

Signed-off-by: Björn Töpel <bjorn.topel@intel.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20210122154725.22140-4-bjorn.topel@gmail.com
3 years agoselftests/bpf: Remove unused enums
Björn Töpel [Fri, 22 Jan 2021 15:47:15 +0000 (16:47 +0100)]
selftests/bpf: Remove unused enums

The enums undef and bidi are not used. Remove them.

Signed-off-by: Björn Töpel <bjorn.topel@intel.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20210122154725.22140-3-bjorn.topel@gmail.com
3 years agoselftests/bpf: Remove a lot of ifobject casting
Björn Töpel [Fri, 22 Jan 2021 15:47:14 +0000 (16:47 +0100)]
selftests/bpf: Remove a lot of ifobject casting

Instead of passing void * all over the place, let us pass the actual
type (ifobject) and remove the void-ptr-to-type-ptr casting.

Signed-off-by: Björn Töpel <bjorn.topel@intel.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20210122154725.22140-2-bjorn.topel@gmail.com
3 years agolibbpf, xsk: Select AF_XDP BPF program based on kernel version
Björn Töpel [Fri, 22 Jan 2021 10:53:51 +0000 (11:53 +0100)]
libbpf, xsk: Select AF_XDP BPF program based on kernel version

Add detection for kernel version, and adapt the BPF program based on
kernel support. This way, users will get the best possible performance
from the BPF program.

Signed-off-by: Björn Töpel <bjorn.topel@intel.com>
Signed-off-by: Marek Majtyka <alardam@gmail.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Reviewed-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
Acked-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
Link: https://lore.kernel.org/bpf/20210122105351.11751-4-bjorn.topel@gmail.com
3 years agoxsk: Fold xp_assign_dev and __xp_assign_dev
Björn Töpel [Fri, 22 Jan 2021 10:53:50 +0000 (11:53 +0100)]
xsk: Fold xp_assign_dev and __xp_assign_dev

Fold xp_assign_dev and __xp_assign_dev. The former directly calls the
latter.

Signed-off-by: Björn Töpel <bjorn.topel@intel.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Reviewed-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
Link: https://lore.kernel.org/bpf/20210122105351.11751-3-bjorn.topel@gmail.com
3 years agoxsk: Remove explicit_free parameter from __xsk_rcv()
Björn Töpel [Fri, 22 Jan 2021 10:53:49 +0000 (11:53 +0100)]
xsk: Remove explicit_free parameter from __xsk_rcv()

The explicit_free parameter of the __xsk_rcv() function was used to
mark whether the call was via the generic XDP or the native XDP
path. Instead of cluttering the code with if-statements and "true/false"
parameters which are hard to understand, simply move the explicit free
to __xsk_map_redirect(), which is always called from the native XDP
path.

Signed-off-by: Björn Töpel <bjorn.topel@intel.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Reviewed-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
Link: https://lore.kernel.org/bpf/20210122105351.11751-2-bjorn.topel@gmail.com
3 years agosamples/bpf: Add xdp program on egress for xdp_redirect_map
Hangbin Liu [Fri, 22 Jan 2021 02:50:07 +0000 (10:50 +0800)]
samples/bpf: Add xdp program on egress for xdp_redirect_map

This patch adds an XDP program on egress to show that we can modify
the packet on egress. In this sample we set the packet's source MAC
to the egress interface's MAC address. The xdp_prog is attached when
the -X option is supplied.
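
A minimal sketch of such an egress program; the section name follows the
xdp_devmap convention and tx_mac_addr is an illustrative global filled in from
user space (the actual sample differs):

  #include <linux/bpf.h>
  #include <linux/if_ether.h>
  #include <bpf/bpf_helpers.h>

  unsigned char tx_mac_addr[ETH_ALEN];

  SEC("xdp_devmap/egress")
  int xdp_set_src_mac(struct xdp_md *ctx)
  {
          void *data_end = (void *)(long)ctx->data_end;
          void *data = (void *)(long)ctx->data;
          struct ethhdr *eth = data;

          if (data + sizeof(*eth) > data_end)
                  return XDP_DROP;

          /* rewrite the source MAC to the egress interface's address */
          __builtin_memcpy(eth->h_source, tx_mac_addr, ETH_ALEN);
          return XDP_PASS;
  }

  char _license[] SEC("license") = "GPL";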

Signed-off-by: Hangbin Liu <liuhangbin@gmail.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Jesper Dangaard Brouer <brouer@redhat.com>
Link: https://lore.kernel.org/bpf/20210122025007.2968381-1-liuhangbin@gmail.com
3 years agobpf: Fix typo in scalar{,32}_min_max_rsh comments
Tobias Klauser [Thu, 21 Jan 2021 17:43:24 +0000 (18:43 +0100)]
bpf: Fix typo in scalar{,32}_min_max_rsh comments

s/bounts/bounds/

Signed-off-by: Tobias Klauser <tklauser@distanz.ch>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20210121174324.24127-1-tklauser@distanz.ch
3 years agobpf, docs: Update build procedure for manually compiling LLVM and Clang
Tiezhu Yang [Fri, 22 Jan 2021 01:39:44 +0000 (09:39 +0800)]
bpf, docs: Update build procedure for manually compiling LLVM and Clang

The current LLVM and Clang build procedure in samples/bpf/README.rst is
out of date. As shown below, the links are not accessible any more.

  $ git clone http://llvm.org/git/llvm.git
  Cloning into 'llvm'...
  fatal: unable to access 'http://llvm.org/git/llvm.git/': Maximum (20) redirects followed
  $ git clone --depth 1 http://llvm.org/git/clang.git
  Cloning into 'clang'...
  fatal: unable to access 'http://llvm.org/git/clang.git/': Maximum (20) redirects followed

The LLVM community has adopted new ways to build the compiler. There are
different ways to build LLVM and Clang; the Clang Getting Started page [1]
documents one of them. As Yonghong said, it is better to copy the build
procedure from Documentation/bpf/bpf_devel_QA.rst to keep things consistent.

I verified the procedure and it proved to be feasible, so we should
update README.rst to reflect reality. At the same time, update the
related comment in the Makefile.

Additionally, as Fangrui said, the directory llvm-project/llvm/build/install
is not used and BUILD_SHARED_LIBS=OFF is the default option [2], so also
update Documentation/bpf/bpf_devel_QA.rst accordingly.

Finally, we recommend that developers who want the fastest incremental
builds use the Ninja build system [1]. It can be found in your system's
package manager, usually as the package ninja or ninja-build [3], so add
ninja to the build dependencies, as suggested by Nathan.

  [1] https://clang.llvm.org/get_started.html
  [2] https://www.llvm.org/docs/CMake.html
  [3] https://github.com/ninja-build/ninja/wiki/Pre-built-Ninja-packages

Signed-off-by: Tiezhu Yang <yangtiezhu@loongson.cn>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Reviewed-by: Nathan Chancellor <natechancellor@gmail.com>
Acked-by: Yonghong Song <yhs@fb.com>
Cc: Fangrui Song <maskray@google.com>
Link: https://lore.kernel.org/bpf/1611279584-26047-1-git-send-email-yangtiezhu@loongson.cn
3 years agoselftest/bpf: Fix typo
Junlin Yang [Thu, 21 Jan 2021 12:23:09 +0000 (20:23 +0800)]
selftest/bpf: Fix typo

Change 'exeeds' to 'exceeds'.

Signed-off-by: Junlin Yang <yangjunlin@yulong.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20210121122309.1501-1-angkery@163.com
3 years agolibbpf: Use string table index from index table if needed
Jiri Olsa [Thu, 21 Jan 2021 20:22:03 +0000 (21:22 +0100)]
libbpf: Use string table index from index table if needed

For very large ELF objects (with many sections), we could get the
special value SHN_XINDEX (65535) for the ELF object's string table
index, e_shstrndx.

Call elf_getshdrstrndx() to get the proper string table index,
instead of reading it directly from the ELF header.
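
A sketch of the libelf call that handles both cases:

  #include <libelf.h>

  static int get_strtab_index(Elf *elf, size_t *shstrndx)
  {
          /* e_shstrndx in the ELF header may hold SHN_XINDEX (65535) when
           * there are too many sections; elf_getshdrstrndx() resolves the
           * real index in that case as well */
          return elf_getshdrstrndx(elf, shstrndx) ? -1 : 0;
  }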

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20210121202203.9346-4-jolsa@kernel.org
3 years agodocs: bpf: Clarify -mcpu=v3 requirement for atomic ops
Brendan Jackman [Wed, 20 Jan 2021 13:39:46 +0000 (13:39 +0000)]
docs: bpf: Clarify -mcpu=v3 requirement for atomic ops

Alexei pointed out [1] that this wording is pretty confusing. Here's
an attempt to be more explicit and clear.

[1] https://lore.kernel.org/bpf/CAADnVQJVvwoZsE1K+6qRxzF7+6CvZNzygnoBW9tZNWJELk5c=Q@mail.gmail.com/

Signed-off-by: Brendan Jackman <jackmanb@google.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20210120133946.2107897-3-jackmanb@google.com
3 years agodocs: bpf: Fixup atomics markup
Brendan Jackman [Wed, 20 Jan 2021 13:39:45 +0000 (13:39 +0000)]
docs: bpf: Fixup atomics markup

This fixes up the markup to fix a warning, be more consistent with
use of monospace, and use the correct .rst syntax for <em> (* instead
of _).

Signed-off-by: Brendan Jackman <jackmanb@google.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Reviewed-by: Lukas Bulwahn <lukas.bulwahn@gmail.com>
Link: https://lore.kernel.org/bpf/20210120133946.2107897-2-jackmanb@google.com
3 years agoMerge branch 'bpf: misc performance improvements for cgroup'
Alexei Starovoitov [Wed, 20 Jan 2021 22:23:00 +0000 (14:23 -0800)]
Merge branch 'bpf: misc performance improvements for cgroup'

Stanislav Fomichev says:

====================

The first patch adds a custom getsockopt handler for TCP_ZEROCOPY_RECEIVE
to remove kmalloc and lock_sock overhead from the data path.
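
A simplified sketch of the bypass idea (function name assumed from the series,
logic trimmed down):

  static bool tcp_bpf_bypass_getsockopt(int level, int optname)
  {
          /* TCP_ZEROCOPY_RECEIVE only carries kernel-filled output, so the
           * cgroup getsockopt hook (and its kmalloc/lock_sock) is skipped */
          return level == SOL_TCP && optname == TCP_ZEROCOPY_RECEIVE;
  }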

The second patch removes kzalloc/kfree from getsockopt for the common cases.

The third patch makes cgroup_bpf_enabled per attach type, so that overhead
is only added for the cgroup attach types used on the system.

No visible user-side changes.

v9:
- include linux/tcp.h instead of netinet/tcp.h in sockopt_sk.c
- note that v9 depends on the commit 4be34f3d0731 ("bpf: Don't leak
  memory in bpf getsockopt when optlen == 0") from bpf tree

v8:
- add tcp.h to tools/include/uapi in the same patch (Martin KaFai Lau)
- kmalloc instead of kzalloc when exporting buffer (Martin KaFai Lau)
- note that v8 depends on the commit 4be34f3d0731 ("bpf: Don't leak
  memory in bpf getsockopt when optlen == 0") from bpf tree

v7:
- add comment about buffer contents for retval != 0 (Martin KaFai Lau)
- export tcp.h into tools/include/uapi (Martin KaFai Lau)
- note that v7 depends on the commit 4be34f3d0731 ("bpf: Don't leak
  memory in bpf getsockopt when optlen == 0") from bpf tree

v6:
- avoid indirect cost for new bpf_bypass_getsockopt (Eric Dumazet)

v5:
- reorder patches to reduce the churn (Martin KaFai Lau)

v4:
- update performance numbers
- bypass_bpf_getsockopt (Martin KaFai Lau)

v3:
- remove extra newline, add comment about sizeof tcp_zerocopy_receive
  (Martin KaFai Lau)
- add another patch to remove lock_sock overhead from
  TCP_ZEROCOPY_RECEIVE; technically, this makes patch #1 obsolete,
  but I'd still prefer to keep it to help with other socket
  options

v2:
- perf numbers for getsockopt kmalloc reduction (Song Liu)
- (sk) in BPF_CGROUP_PRE_CONNECT_ENABLED (Song Liu)
- 128 -> 64 buffer size, BUILD_BUG_ON (Martin KaFai Lau)
====================

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
3 years agobpf: Split cgroup_bpf_enabled per attach type
Stanislav Fomichev [Fri, 15 Jan 2021 16:35:01 +0000 (08:35 -0800)]
bpf: Split cgroup_bpf_enabled per attach type

When we attach any cgroup hook, the rest (even if unused/unattached) start
to contribute a small overhead. In particular, the one we want to avoid is
__cgroup_bpf_run_filter_skb, which does two redirections to get to
the cgroup and pushes/pulls the skb.

Let's split cgroup_bpf_enabled to be per attach type, to make sure
only the attach types in use trigger it.
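
A simplified sketch of the per-attach-type gate (not the literal patch):

  extern struct static_key_false cgroup_bpf_enabled_key[MAX_BPF_ATTACH_TYPE];

  #define cgroup_bpf_enabled(atype) \
          static_branch_unlikely(&cgroup_bpf_enabled_key[atype])

  /* the run-macros then index the key with their own attach type, e.g.: */
  #define BPF_CGROUP_RUN_PROG_INET_INGRESS(sk, skb)                       \
  ({                                                                      \
          int __ret = 0;                                                  \
          if (cgroup_bpf_enabled(BPF_CGROUP_INET_INGRESS))                \
                  __ret = __cgroup_bpf_run_filter_skb(sk, skb,            \
                                          BPF_CGROUP_INET_INGRESS);       \
          __ret;                                                          \
  })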

I've dropped some existing high-level cgroup_bpf_enabled in some
places because BPF_PROG_CGROUP_XXX_RUN macros usually have another
cgroup_bpf_enabled check.

I also had to copy-paste BPF_CGROUP_RUN_SA_PROG_LOCK for
GETPEERNAME/GETSOCKNAME because the type argument for cgroup_bpf_enabled[type]
has to be constant and known at compile time.

Signed-off-by: Stanislav Fomichev <sdf@google.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Song Liu <songliubraving@fb.com>
Link: https://lore.kernel.org/bpf/20210115163501.805133-4-sdf@google.com