platform/kernel/linux-rpi.git
2 years ago libbpf: Clean up ringbuf size adjustment implementation
Andrii Nakryiko [Tue, 10 May 2022 18:51:59 +0000 (11:51 -0700)]
libbpf: Clean up ringbuf size adjustment implementation

Drop unused iteration variable, move overflow prevention check into the
for loop.

Fixes: 0087a681fa8c ("libbpf: Automatically fix up BPF_MAP_TYPE_RINGBUF size, if necessary")
Reported-by: Nathan Chancellor <nathan@kernel.org>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20220510185159.754299-1-andrii@kernel.org
2 years ago Merge branch 'Attach a cookie to a tracing program.'
Andrii Nakryiko [Wed, 11 May 2022 00:47:45 +0000 (17:47 -0700)]
Merge branch 'Attach a cookie to a tracing program.'

Kui-Feng Lee says:

====================

Allow users to attach a 64-bit cookie to a bpf_link of fentry, fexit,
or fmod_ret.

This patchset includes several major changes.

 - Define struct bpf_tramp_links to replace bpf_tramp_progs.
   struct bpf_tramp_links collects the bpf_links of a trampoline.

 - Generate a trampoline to call bpf_progs of given bpf_links.

 - Trampolines always set/reset bpf_run_ctx before/after
   calling/leaving a tracing program.

 - Attach a cookie to a bpf_link of fentry/fexit/fmod_ret/lsm.  The
   value will be available when running the associated bpf_prog.

The major differences from v6:

 - bpf_link_create() can create links of BPF_LSM_MAC attach type.

 - Add a test for lsm.

 - Add function proto of bpf_get_attach_cookie() for lsm.

 - Check BPF_LSM_MAC in bpf_prog_has_trampoline().

 - Adapt to the changes of LINK_CREATE made by Andrii.

The major differences from v7:

 - Change stack_size instead of pushing/popping run_ctx.

 - Move cookie to bpf_tramp_link from bpf_tracing_link.

v1: https://lore.kernel.org/all/20220126214809.3868787-1-kuifeng@fb.com/
v2: https://lore.kernel.org/bpf/20220316004231.1103318-1-kuifeng@fb.com/
v3: https://lore.kernel.org/bpf/20220407192552.2343076-1-kuifeng@fb.com/
v4: https://lore.kernel.org/bpf/20220411173429.4139609-1-kuifeng@fb.com/
v5: https://lore.kernel.org/bpf/20220412165555.4146407-1-kuifeng@fb.com/
v6: https://lore.kernel.org/bpf/20220416042940.656344-1-kuifeng@fb.com/
v7: https://lore.kernel.org/bpf/20220508032117.2783209-1-kuifeng@fb.com/
====================

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
2 years ago selftest/bpf: The test cases of BPF cookie for fentry/fexit/fmod_ret/lsm.
Kui-Feng Lee [Tue, 10 May 2022 20:59:23 +0000 (13:59 -0700)]
selftest/bpf: The test cases of BPF cookie for fentry/fexit/fmod_ret/lsm.

Make sure BPF cookies are correct for fentry/fexit/fmod_ret/lsm.

Signed-off-by: Kui-Feng Lee <kuifeng@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20220510205923.3206889-6-kuifeng@fb.com
2 years ago libbpf: Assign cookies to links in libbpf.
Kui-Feng Lee [Tue, 10 May 2022 20:59:22 +0000 (13:59 -0700)]
libbpf: Assign cookies to links in libbpf.

Add a cookie field to the attributes of bpf_link_create().
Add bpf_program__attach_trace_opts() to attach a cookie to a link.
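
A minimal user-space sketch of how such an attachment could look; the
skeleton, program name and cookie value are hypothetical:

  #include <errno.h>
  #include <bpf/libbpf.h>

  static int attach_with_cookie(struct my_skel *skel)
  {
          LIBBPF_OPTS(bpf_trace_opts, opts, .cookie = 0x1234ULL);
          struct bpf_link *link;

          link = bpf_program__attach_trace_opts(skel->progs.fentry_prog, &opts);
          return link ? 0 : -errno;
  }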

Signed-off-by: Kui-Feng Lee <kuifeng@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20220510205923.3206889-5-kuifeng@fb.com
2 years ago bpf, x86: Attach a cookie to fentry/fexit/fmod_ret/lsm.
Kui-Feng Lee [Tue, 10 May 2022 20:59:21 +0000 (13:59 -0700)]
bpf, x86: Attach a cookie to fentry/fexit/fmod_ret/lsm.

Pass a cookie along with BPF_LINK_CREATE requests.

Add a bpf_cookie field to struct bpf_tracing_link to attach a cookie.
The cookie of a bpf_tracing_link is available by calling
bpf_get_attach_cookie when running the BPF program of the attached
link.

The value of a cookie will be set at bpf_tramp_run_ctx by the
trampoline of the link.
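
A hedged BPF-side sketch of reading that cookie; the attach target is just
an example:

  #include "vmlinux.h"
  #include <bpf/bpf_helpers.h>
  #include <bpf/bpf_tracing.h>

  SEC("fentry/bpf_fentry_test1")
  int BPF_PROG(read_cookie)
  {
          __u64 cookie = bpf_get_attach_cookie(ctx); /* set at link creation */

          bpf_printk("cookie=%llu", cookie);
          return 0;
  }

  char LICENSE[] SEC("license") = "GPL";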

Signed-off-by: Kui-Feng Lee <kuifeng@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20220510205923.3206889-4-kuifeng@fb.com
2 years ago bpf, x86: Create bpf_tramp_run_ctx on the caller thread's stack
Kui-Feng Lee [Tue, 10 May 2022 20:59:20 +0000 (13:59 -0700)]
bpf, x86: Create bpf_tramp_run_ctx on the caller thread's stack

BPF trampolines will create a bpf_tramp_run_ctx, a bpf_run_ctx, on
stacks and set/reset the current bpf_run_ctx before/after calling a
bpf_prog.

Signed-off-by: Kui-Feng Lee <kuifeng@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20220510205923.3206889-3-kuifeng@fb.com
2 years ago bpf, x86: Generate trampolines from bpf_tramp_links
Kui-Feng Lee [Tue, 10 May 2022 20:59:19 +0000 (13:59 -0700)]
bpf, x86: Generate trampolines from bpf_tramp_links

Replace struct bpf_tramp_progs with struct bpf_tramp_links to collect
struct bpf_tramp_link(s) for a trampoline.  struct bpf_tramp_link
extends bpf_link to act as a linked list node.

arch_prepare_bpf_trampoline() accepts a struct bpf_tramp_links to
collect all bpf_tramp_link(s) that a trampoline should call.

Change BPF trampoline and bpf_struct_ops to pass bpf_tramp_links
instead of bpf_tramp_progs.

Signed-off-by: Kui-Feng Lee <kuifeng@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20220510205923.3206889-2-kuifeng@fb.com
2 years ago Merge branch 'bpf: Speed up symbol resolving in kprobe multi link'
Alexei Starovoitov [Tue, 10 May 2022 21:42:06 +0000 (14:42 -0700)]
Merge branch 'bpf: Speed up symbol resolving in kprobe multi link'

Jiri Olsa says:

====================

hi,
sending additional fix for symbol resolving in kprobe multi link
requested by Alexei and Andrii [1].

This speeds up bpftrace kprobe attachment, when using pure symbols
(3344 symbols) to attach:

Before:

  # perf stat -r 5 -e cycles ./src/bpftrace -e 'kprobe:x* {  } i:ms:1 { exit(); }'
  ...
  6.5681 +- 0.0225 seconds time elapsed  ( +-  0.34% )

After:

  # perf stat -r 5 -e cycles ./src/bpftrace -e 'kprobe:x* {  } i:ms:1 { exit(); }'
  ...
  0.5661 +- 0.0275 seconds time elapsed  ( +-  4.85% )

v6 changes:
  - rewrote patch 1 changelog and fixed the line length [Christoph]

v5 changes:
  - added acks [Masami]
  - workaround in selftest for RCU warning by filtering out several
    functions to attach

v4 changes:
  - fix compile issue [kernel test robot]
  - added acks [Andrii]

v3 changes:
  - renamed kallsyms_lookup_names to ftrace_lookup_symbols
    and moved it to ftrace.c [Masami]
  - added ack [Andrii]
  - couple small test fixes [Andrii]

v2 changes (first version [2]):
  - removed the 2 seconds check [Alexei]
  - moving/forcing symbols sorting out of kallsyms_lookup_names function [Alexei]
  - skipping one array allocation and copy_from_user [Andrii]
  - several small fixes [Masami,Andrii]
  - build fix [kernel test robot]

thanks,
jirka

[1] https://lore.kernel.org/bpf/CAEf4BzZtQaiUxQ-sm_hH2qKPRaqGHyOfEsW96DxtBHRaKLoL3Q@mail.gmail.com/
[2] https://lore.kernel.org/bpf/20220407125224.310255-1-jolsa@kernel.org/
====================

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2 years ago selftests/bpf: Add attach bench test
Jiri Olsa [Tue, 10 May 2022 12:26:16 +0000 (14:26 +0200)]
selftests/bpf: Add attach bench test

Add a test that reads all functions from the ftrace available_filter_functions
file and attaches them all through the kprobe_multi API.

It also prints stats info with -v option, like on my setup:

  test_bench_attach: found 48712 functions
  test_bench_attach: attached in   1.069s
  test_bench_attach: detached in   0.373s
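
A rough user-space sketch of the attachment step; "syms" would hold the
names read from available_filter_functions, and the program handle is
assumed to come from an already-opened BPF object:

  #include <bpf/libbpf.h>

  static struct bpf_link *attach_all(struct bpf_program *prog,
                                     const char **syms, size_t cnt)
  {
          LIBBPF_OPTS(bpf_kprobe_multi_opts, opts,
                  .syms = syms,
                  .cnt = cnt,
          );

          return bpf_program__attach_kprobe_multi_opts(prog, NULL, &opts);
  }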

Acked-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Link: https://lore.kernel.org/r/20220510122616.2652285-6-jolsa@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2 years ago bpf: Resolve symbols with ftrace_lookup_symbols for kprobe multi link
Jiri Olsa [Tue, 10 May 2022 12:26:15 +0000 (14:26 +0200)]
bpf: Resolve symbols with ftrace_lookup_symbols for kprobe multi link

Use the ftrace_lookup_symbols function to speed up symbol lookup in
kprobe multi link attachment, replacing the current
kprobe_multi_resolve_syms function.

This speeds up bpftrace kprobe attachment:

Before:

  # perf stat -r 5 -e cycles ./src/bpftrace -e 'kprobe:x* {  } i:ms:1 { exit(); }'
  ...
  6.5681 +- 0.0225 seconds time elapsed  ( +-  0.34% )

After:

  # perf stat -r 5 -e cycles ./src/bpftrace -e 'kprobe:x* {  } i:ms:1 { exit(); }'
  ...
  0.5661 +- 0.0275 seconds time elapsed  ( +-  4.85% )

Acked-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Link: https://lore.kernel.org/r/20220510122616.2652285-5-jolsa@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2 years ago fprobe: Resolve symbols with ftrace_lookup_symbols
Jiri Olsa [Tue, 10 May 2022 12:26:14 +0000 (14:26 +0200)]
fprobe: Resolve symbols with ftrace_lookup_symbols

Use ftrace_lookup_symbols to speed up symbol lookup
in the register_fprobe_syms API.

Acked-by: Masami Hiramatsu <mhiramat@kernel.org>
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Link: https://lore.kernel.org/r/20220510122616.2652285-4-jolsa@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2 years ago ftrace: Add ftrace_lookup_symbols function
Jiri Olsa [Tue, 10 May 2022 12:26:13 +0000 (14:26 +0200)]
ftrace: Add ftrace_lookup_symbols function

Add an ftrace_lookup_symbols function that resolves an array of symbols
with a single pass over kallsyms.

The user provides an array of string pointers with a count, and a pointer
to a preallocated array for the resolved values.

  int ftrace_lookup_symbols(const char **sorted_syms, size_t cnt,
                            unsigned long *addrs)

It iterates over all kallsyms symbols and tries to look up each one in the
provided symbols array with bsearch. The symbols array therefore needs to
be sorted by name.

We also check each symbol against ftrace_location, because this API
will be used for fprobe symbol resolving.
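
A small user-space model of the lookup idea (not the kernel code itself):
walk every available symbol once and bsearch it in the caller's sorted
array of requested names.

  #include <stdlib.h>
  #include <string.h>

  static int cmp_name(const void *a, const void *b)
  {
          return strcmp(*(const char **)a, *(const char **)b);
  }

  /* fill addrs[i] for each sorted_syms[i] with one pass over all symbols */
  static int lookup_symbols(const char **sorted_syms, size_t cnt,
                            unsigned long *addrs,
                            const char **all_syms,
                            const unsigned long *all_addrs, size_t all_cnt)
  {
          size_t found = 0;

          for (size_t i = 0; i < all_cnt && found < cnt; i++) {
                  const char **hit = bsearch(&all_syms[i], sorted_syms, cnt,
                                             sizeof(*sorted_syms), cmp_name);
                  if (hit) {
                          addrs[hit - sorted_syms] = all_addrs[i];
                          found++;
                  }
          }
          return found == cnt ? 0 : -1;
  }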

Suggested-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Reviewed-by: Masami Hiramatsu <mhiramat@kernel.org>
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Link: https://lore.kernel.org/r/20220510122616.2652285-3-jolsa@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2 years ago kallsyms: Make kallsyms_on_each_symbol generally available
Jiri Olsa [Tue, 10 May 2022 12:26:12 +0000 (14:26 +0200)]
kallsyms: Make kallsyms_on_each_symbol generally available

Make kallsyms_on_each_symbol generally available, so it can be
used outside the CONFIG_LIVEPATCH option in following changes.

Rather than adding another ifdef option, let's make the function
generally available whenever the CONFIG_KALLSYMS option is defined.

Cc: Christoph Hellwig <hch@lst.de>
Reviewed-by: Masami Hiramatsu <mhiramat@kernel.org>
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Link: https://lore.kernel.org/r/20220510122616.2652285-2-jolsa@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2 years ago Merge branch 'bpf: bpf link iterator'
Alexei Starovoitov [Tue, 10 May 2022 18:20:45 +0000 (11:20 -0700)]
Merge branch 'bpf: bpf link iterator'

Dmitrii Dolgov says:

====================

BPF links seem to be one of the important structures for which no
iterator is provided. Such an iterator could be useful in cases where the
generic 'task/file' iterator is not suitable or better performance is needed.

The implementation is mostly copied from prog iterator. This time tests were
executed, although I still had to exclude test_bpf_nf (failed to find BTF info
for global/extern symbol 'bpf_skb_ct_lookup') -- since it's unrelated, I hope
it's a minor issue.

Per a suggestion from the previous discussion, there is a new patch for
converting CHECK to the corresponding ASSERT_* macros. Such replacement is done
only where the final result would be the same; e.g., CHECK calls with
important-looking custom format strings are still in place -- from what
I understand, ASSERT_* doesn't allow specifying such a format.

The third small patch fixes what looks like a copy-paste error in the condition
checking.
====================

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2 years ago selftests/bpf: Add bpf link iter test
Dmitrii Dolgov [Tue, 10 May 2022 15:52:33 +0000 (17:52 +0200)]
selftests/bpf: Add bpf link iter test

Add a simple test for bpf link iterator

Signed-off-by: Dmitrii Dolgov <9erthalion6@gmail.com>
Link: https://lore.kernel.org/r/20220510155233.9815-5-9erthalion6@gmail.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2 years ago selftests/bpf: Use ASSERT_* instead of CHECK
Dmitrii Dolgov [Tue, 10 May 2022 15:52:32 +0000 (17:52 +0200)]
selftests/bpf: Use ASSERT_* instead of CHECK

Replace usage of CHECK with the corresponding ASSERT_* macro for bpf_iter
tests. This is only done where the final result is equivalent; no changes are
made when the replacement would mean losing some information, e.g. from a
format string.

Signed-off-by: Dmitrii Dolgov <9erthalion6@gmail.com>
Link: https://lore.kernel.org/r/20220510155233.9815-4-9erthalion6@gmail.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2 years ago selftests/bpf: Fix result check for test_bpf_hash_map
Dmitrii Dolgov [Tue, 10 May 2022 15:52:31 +0000 (17:52 +0200)]
selftests/bpf: Fix result check for test_bpf_hash_map

The original condition looks like a typo, verify the skeleton loading
result instead.

Signed-off-by: Dmitrii Dolgov <9erthalion6@gmail.com>
Link: https://lore.kernel.org/r/20220510155233.9815-3-9erthalion6@gmail.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2 years ago bpf: Add bpf_link iterator
Dmitrii Dolgov [Tue, 10 May 2022 15:52:30 +0000 (17:52 +0200)]
bpf: Add bpf_link iterator

Implement bpf_link iterator to traverse links via bpf_seq_file
operations. The changeset is mostly shamelessly copied from
commit a228a64fc1e4 ("bpf: Add bpf_prog iterator")
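
A minimal BPF iterator program sketch; it assumes a vmlinux.h that includes
the new bpf_iter__bpf_link context type, and the output format is
illustrative:

  #include "vmlinux.h"
  #include <bpf/bpf_helpers.h>
  #include <bpf/bpf_tracing.h>

  SEC("iter/bpf_link")
  int dump_bpf_link(struct bpf_iter__bpf_link *ctx)
  {
          struct seq_file *seq = ctx->meta->seq;
          struct bpf_link *link = ctx->link;

          if (!link)
                  return 0;

          BPF_SEQ_PRINTF(seq, "link id %u\n", link->id);
          return 0;
  }

  char LICENSE[] SEC("license") = "GPL";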

Signed-off-by: Dmitrii Dolgov <9erthalion6@gmail.com>
Acked-by: Yonghong Song <yhs@fb.com>
Link: https://lore.kernel.org/r/20220510155233.9815-2-9erthalion6@gmail.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2 years ago Merge branch 'Add source ip in bpf tunnel key'
Alexei Starovoitov [Tue, 10 May 2022 17:49:03 +0000 (10:49 -0700)]
Merge branch 'Add source ip in bpf tunnel key'

Kaixi Fan says:

====================
From: Kaixi Fan <fankaixi.li@bytedance.com>

Now bpf code could not set tunnel source ip address of ip tunnel. So it
could not support flow based tunnel mode completely. Because flow based
tunnel mode could set tunnel source, destination ip address and tunnel
key simultaneously.

Flow-based tunnels are useful for overlay networks, and by configuring the
tunnel source IP address, users can make their networks more elastic.
For example, the tunnel source IP can be used to select different egress
NIC interfaces for different flows with the same tunnel destination IP. As
another example, a user can choose one of the multiple IP addresses of the
egress NIC interface as the packet's tunnel source IP.

Add tunnel and tunnel source testcases in test_progs. Other types of
tunnel testcases will be moved to test_progs step by step in the
future.

v6:
- use libbpf api to attach tc progs and remove some shell commands to reduce
  test runtime based on Alexei Starovoitov's suggestion

v5:
- fix some code format errors
- use bpf kernel code at namespace at_ns0 to set tunnel metadata

v4:
- fix subject error of first patch

v3:
- move vxlan tunnel testcases to test_progs
- replace bpf_trace_printk with bpf_printk
- rename bpf kernel prog section name to tic

v2:
- merge vxlan tunnel and tunnel source ip testcases in test_tunnel.sh
====================

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2 years ago selftests/bpf: Replace bpf_trace_printk in tunnel kernel code
Kaixi Fan [Sat, 30 Apr 2022 07:48:44 +0000 (15:48 +0800)]
selftests/bpf: Replace bpf_trace_printk in tunnel kernel code

Replace bpf_trace_printk with bpf_printk in test_tunnel_kern.c.
The bpf_printk function is easier to use than bpf_trace_printk.

Signed-off-by: Kaixi Fan <fankaixi.li@bytedance.com>
Link: https://lore.kernel.org/r/20220430074844.69214-4-fankaixi.li@bytedance.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2 years ago selftests/bpf: Move vxlan tunnel testcases to test_progs
Kaixi Fan [Sat, 30 Apr 2022 07:48:43 +0000 (15:48 +0800)]
selftests/bpf: Move vxlan tunnel testcases to test_progs

Move the vxlan tunnel testcases from test_tunnel.sh to test_progs,
and also add vxlan tunnel source testcases. Other tunnel testcases
will be moved to test_progs step by step in the future.
Rename the BPF program section name to SEC("tc") because the test_progs
BPF loader cannot load sections with names like SEC("gre_set_tunnel").
Because of this, use bpftool to load the BPF programs in test_tunnel.sh.

Signed-off-by: Kaixi Fan <fankaixi.li@bytedance.com>
Link: https://lore.kernel.org/r/20220430074844.69214-3-fankaixi.li@bytedance.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2 years ago bpf: Add source ip in "struct bpf_tunnel_key"
Kaixi Fan [Sat, 30 Apr 2022 07:48:42 +0000 (15:48 +0800)]
bpf: Add source ip in "struct bpf_tunnel_key"

Add a tunnel source IP field to "struct bpf_tunnel_key", along with the
code to set and get the tunnel source field.
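
A hedged tc BPF sketch of setting the new source address; the addresses are
examples and the field is assumed to be named local_ipv4, mirroring the
existing remote_ipv4:

  #include <linux/bpf.h>
  #include <linux/pkt_cls.h>
  #include <bpf/bpf_helpers.h>

  SEC("tc")
  int vxlan_set_src(struct __sk_buff *skb)
  {
          struct bpf_tunnel_key key = {};

          key.remote_ipv4 = 0xac100164;   /* 172.16.1.100 */
          key.local_ipv4  = 0xac100232;   /* 172.16.2.50, the tunnel source */
          key.tunnel_id   = 2;
          key.tunnel_ttl  = 64;

          if (bpf_skb_set_tunnel_key(skb, &key, sizeof(key), BPF_F_ZERO_CSUM_TX))
                  return TC_ACT_SHOT;
          return TC_ACT_OK;
  }

  char LICENSE[] SEC("license") = "GPL";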

Signed-off-by: Kaixi Fan <fankaixi.li@bytedance.com>
Link: https://lore.kernel.org/r/20220430074844.69214-2-fankaixi.li@bytedance.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2 years ago bpftool: bpf_link_get_from_fd support for LSM programs in lskel
KP Singh [Mon, 9 May 2022 21:49:05 +0000 (21:49 +0000)]
bpftool: bpf_link_get_from_fd support for LSM programs in lskel

bpf_link_get_from_fd currently returns a NULL fd for LSM programs.
LSM programs are similar to tracing programs and can also use
skel_raw_tracepoint_open.

Signed-off-by: KP Singh <kpsingh@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20220509214905.3754984-1-kpsingh@kernel.org
2 years ago selftests/bpf: Handle batch operations for map-in-map bpf-maps
Takshak Chahande [Tue, 10 May 2022 08:22:21 +0000 (01:22 -0700)]
selftests/bpf: Handle batch operations for map-in-map bpf-maps

This patch adds test cases that handle 4 combinations:
 a) outer map: BPF_MAP_TYPE_ARRAY_OF_MAPS
    inner maps: BPF_MAP_TYPE_ARRAY and BPF_MAP_TYPE_HASH
 b) outer map: BPF_MAP_TYPE_HASH_OF_MAPS
    inner maps: BPF_MAP_TYPE_ARRAY and BPF_MAP_TYPE_HASH

Signed-off-by: Takshak Chahande <ctakshak@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Yonghong Song <yhs@fb.com>
Link: https://lore.kernel.org/bpf/20220510082221.2390540-2-ctakshak@fb.com
2 years ago bpf: Extend batch operations for map-in-map bpf-maps
Takshak Chahande [Tue, 10 May 2022 08:22:20 +0000 (01:22 -0700)]
bpf: Extend batch operations for map-in-map bpf-maps

This patch extends batch operations support for map-in-map map-types:
BPF_MAP_TYPE_HASH_OF_MAPS and BPF_MAP_TYPE_ARRAY_OF_MAPS

A use case where an outer HASH map holds hundreds of VIP entries, with the
reuse-ports associated with each VIP stored in a REUSEPORT_SOCKARRAY inner
map, needs batch operations for a performance gain.

This patch leverages the existing generic functions for most of the batch
operations. As a map-in-map's value contains the actual reference to the inner
map, the BPF_MAP_TYPE_HASH_OF_MAPS type needs an extra step to fetch the
map_id from the reference value.

selftests are added in next patch 2/2.
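
A rough user-space sketch of one batched lookup over an ARRAY_OF_MAPS outer
map; the fd is assumed to exist, the batch size is arbitrary, and the
returned values are inner-map IDs:

  #include <bpf/bpf.h>

  static int dump_outer_map(int outer_map_fd)
  {
          __u32 keys[8], vals[8];
          __u32 out_batch = 0, count = 8;
          LIBBPF_OPTS(bpf_map_batch_opts, opts);

          /* NULL in_batch starts the walk from the beginning */
          return bpf_map_lookup_batch(outer_map_fd, NULL, &out_batch,
                                      keys, vals, &count, &opts);
  }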

Signed-off-by: Takshak Chahande <ctakshak@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Yonghong Song <yhs@fb.com>
Link: https://lore.kernel.org/bpf/20220510082221.2390540-1-ctakshak@fb.com
2 years ago bpf: Print some info if disable bpf_jit_enable failed
Tiezhu Yang [Tue, 10 May 2022 03:35:03 +0000 (11:35 +0800)]
bpf: Print some info if disable bpf_jit_enable failed

A user told me that bpf_jit_enable can be disabled on one system, but they
failed to disable bpf_jit_enable on another system:

  # echo 0 > /proc/sys/net/core/bpf_jit_enable
  bash: echo: write error: Invalid argument

No useful info is available through the dmesg log; a quick analysis shows
that the issue is related to CONFIG_BPF_JIT_ALWAYS_ON.

When CONFIG_BPF_JIT_ALWAYS_ON is enabled, bpf_jit_enable is permanently set
to 1 and setting any other value will return a failure.

It is better to print some info to tell the user when disabling
bpf_jit_enable fails.

Signed-off-by: Tiezhu Yang <yangtiezhu@loongson.cn>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/1652153703-22729-3-git-send-email-yangtiezhu@loongson.cn
2 years ago net: sysctl: Use SYSCTL_TWO instead of &two
Tiezhu Yang [Tue, 10 May 2022 03:35:02 +0000 (11:35 +0800)]
net: sysctl: Use SYSCTL_TWO instead of &two

It is better to use SYSCTL_TWO instead of &two, and then we can
remove the variable "two" in net/core/sysctl_net_core.c.

Signed-off-by: Tiezhu Yang <yangtiezhu@loongson.cn>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/1652153703-22729-2-git-send-email-yangtiezhu@loongson.cn
2 years ago bpf: Remove unused parameter from find_kfunc_desc_btf()
Yuntao Wang [Thu, 5 May 2022 07:01:14 +0000 (15:01 +0800)]
bpf: Remove unused parameter from find_kfunc_desc_btf()

The func_id parameter in find_kfunc_desc_btf() is not used, get rid of it.

Fixes: 2357672c54c3 ("bpf: Introduce BPF support for kernel module function calls")
Signed-off-by: Yuntao Wang <ytcoode@gmail.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: Martin KaFai Lau <kafai@fb.com>
Acked-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
Link: https://lore.kernel.org/bpf/20220505070114.3522522-1-ytcoode@gmail.com
2 years ago bpftool: Declare generator name
Jason Wang [Mon, 9 May 2022 09:02:47 +0000 (17:02 +0800)]
bpftool: Declare generator name

Most code generators declare their name, so do the same for bpftool.

Signed-off-by: Jason Wang <jasowang@redhat.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20220509090247.5457-1-jasowang@redhat.com
2 years ago samples: bpf: Don't fail for a missing VMLINUX_BTF when VMLINUX_H is provided
Jerome Marchand [Sat, 7 May 2022 16:16:35 +0000 (18:16 +0200)]
samples: bpf: Don't fail for a missing VMLINUX_BTF when VMLINUX_H is provided

The samples/bpf build currently always fails if it can't generate
vmlinux.h from vmlinux, even when vmlinux.h is directly provided by the
VMLINUX_H variable, which makes VMLINUX_H pointless.
Only fail when neither method works.

Fixes: 384b6b3bbf0d ("samples: bpf: Add vmlinux.h generation support")
Reported-by: CKI Project <cki-project@redhat.com>
Reported-by: Veronika Kabatova <vkabatov@redhat.com>
Signed-off-by: Jerome Marchand <jmarchan@redhat.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20220507161635.2219052-1-jmarchan@redhat.com
2 years ago Merge branch 'bpftool: fix feature output when helper probes fail'
Andrii Nakryiko [Tue, 10 May 2022 00:16:05 +0000 (17:16 -0700)]
Merge branch 'bpftool: fix feature output when helper probes fail'

Milan Landaverde says:

====================

Currently in bpftool's feature probe, we incorrectly tell the user that
all of the helper functions are supported for program types where helper
probing fails or is explicitly unsupported[1]:

$ bpftool feature probe
...
eBPF helpers supported for program type tracing:
- bpf_map_lookup_elem
- bpf_map_update_elem
- bpf_map_delete_elem
...
- bpf_redirect_neigh
- bpf_check_mtu
- bpf_sys_bpf
- bpf_sys_close

This patch adjusts bpftool to relay to the user when helper support
can't be determined:

$ bpftool feature probe
...
eBPF helpers supported for program type lirc_mode2:
    Program type not supported
eBPF helpers supported for program type tracing:
    Could not determine which helpers are available
eBPF helpers supported for program type struct_ops:
    Could not determine which helpers are available
eBPF helpers supported for program type ext:
    Could not determine which helpers are available

Rather than imply that no helpers are available for the program type, we
let the user know that helper function probing failed entirely.

[1] https://lore.kernel.org/bpf/20211217171202.3352835-2-andrii@kernel.org/
====================

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
2 years ago bpftool: Output message if no helpers found in feature probing
Milan Landaverde [Wed, 4 May 2022 16:13:32 +0000 (12:13 -0400)]
bpftool: Output message if no helpers found in feature probing

Currently in libbpf, we have hardcoded program types that are not
supported for helper function probing (e.g. tracing, ext, lsm).
Due to this (and other legitimate failures), bpftool feature probe returns
empty for those program type helper functions.

Instead of implying to the user that there are no helper functions
available for a program type, we output a message to the user explaining
that helper function probing failed for that program type.

Signed-off-by: Milan Landaverde <milan@mdaverde.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20220504161356.3497972-3-milan@mdaverde.com
2 years ago bpftool: Adjust for error codes from libbpf probes
Milan Landaverde [Wed, 4 May 2022 16:13:31 +0000 (12:13 -0400)]
bpftool: Adjust for error codes from libbpf probes

Originally [1], libbpf's (now deprecated) probe functions returned a bool
to acknowledge support, but the new APIs return an int with a possible
negative error code to reflect probe failure. With this change, bpftool
declares that maps and helpers are not available on probe failures.

[1]: https://lore.kernel.org/bpf/20220202225916.3313522-3-andrii@kernel.org/

Signed-off-by: Milan Landaverde <milan@mdaverde.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20220504161356.3497972-2-milan@mdaverde.com
2 years ago selftests/bpf: Test libbpf's ringbuf size fix up logic
Andrii Nakryiko [Mon, 9 May 2022 00:41:48 +0000 (17:41 -0700)]
selftests/bpf: Test libbpf's ringbuf size fix up logic

Make sure we always exercise libbpf's ringbuf map size adjustment logic
by specifying a non-zero size that's definitely not a page size multiple.

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20220509004148.1801791-10-andrii@kernel.org
2 years ago libbpf: Automatically fix up BPF_MAP_TYPE_RINGBUF size, if necessary
Andrii Nakryiko [Mon, 9 May 2022 00:41:47 +0000 (17:41 -0700)]
libbpf: Automatically fix up BPF_MAP_TYPE_RINGBUF size, if necessary

The kernel imposes a pretty particular restriction on ringbuf map size: it
has to be a power-of-2 multiple of the page size. While generally this isn't
hard for users to satisfy, sometimes it's impossible to do declaratively in
BPF source code or just plain inconvenient to do at runtime.

One such example might be BPF libraries that are supposed to work on
different architectures, which might not agree on what the common page
size is.

Let libbpf find the right size for the user instead, if the specified size
doesn't satisfy kernel requirements. If the user didn't set a size at all,
that's most probably a mistake, so don't upsize such a zero size to one full
page. We also need to be careful about not overflowing __u32
max_entries.
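
An illustrative user-space model of that fix-up rule (names and structure are
not libbpf's own):

  #include <stdint.h>
  #include <unistd.h>

  static uint32_t ringbuf_fixup_size(uint32_t requested)
  {
          uint64_t page_sz = sysconf(_SC_PAGE_SIZE);
          uint64_t sz;

          if (!requested)
                  return 0;       /* zero size is likely a mistake; keep it */

          /* smallest power-of-2 multiple of page size covering the request */
          for (sz = page_sz; sz < requested; sz *= 2)
                  ;

          return sz > UINT32_MAX ? requested : (uint32_t)sz;
  }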

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20220509004148.1801791-9-andrii@kernel.org
2 years ago libbpf: Provide barrier() and barrier_var() in bpf_helpers.h
Andrii Nakryiko [Mon, 9 May 2022 00:41:46 +0000 (17:41 -0700)]
libbpf: Provide barrier() and barrier_var() in bpf_helpers.h

Add barrier() and barrier_var() macros to bpf_helpers.h to be used by
end users. While somewhat advanced and specialized instruments, they are
sometimes indispensable. Instead of requiring each user to figure out the
exact asm volatile incantations for themselves, provide them from
bpf_helpers.h.

Also remove conflicting definitions from selftests. Some tests rely on
barrier_var() being defined to nothing; those will still work, as libbpf
does the #ifndef/#endif guarding for barrier() and barrier_var(),
allowing users to redefine them if necessary.
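
The shapes below are only illustrative of what such compiler-barrier macros
commonly look like; check bpf_helpers.h itself for the exact definitions:

  #ifndef barrier
  #define barrier() asm volatile("" ::: "memory")
  #endif

  #ifndef barrier_var
  #define barrier_var(var) asm volatile("" : "+r"(var))
  #endif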

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20220509004148.1801791-8-andrii@kernel.org
2 years ago selftests/bpf: Add bpf_core_field_offset() tests
Andrii Nakryiko [Mon, 9 May 2022 00:41:45 +0000 (17:41 -0700)]
selftests/bpf: Add bpf_core_field_offset() tests

Add test cases for bpf_core_field_offset() helper.

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20220509004148.1801791-7-andrii@kernel.org
2 years ago libbpf: Complete field-based CO-RE helpers with field offset helper
Andrii Nakryiko [Mon, 9 May 2022 00:41:44 +0000 (17:41 -0700)]
libbpf: Complete field-based CO-RE helpers with field offset helper

Add bpf_core_field_offset() helper to complete field-based CO-RE
helpers. This helper can be useful for feature-detection and for some
more advanced cases of field reading (e.g., reading flexible array members).
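
A hedged sketch of using it; the struct, field and section names are
illustrative only:

  #include "vmlinux.h"
  #include <bpf/bpf_helpers.h>
  #include <bpf/bpf_core_read.h>

  SEC("raw_tp")
  int probe_field_offset(void *ctx)
  {
          __u64 off = bpf_core_field_offset(struct task_struct, pid);

          bpf_printk("pid field offset: %llu", off);
          return 0;
  }

  char LICENSE[] SEC("license") = "GPL";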

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20220509004148.1801791-6-andrii@kernel.org
2 years ago selftests/bpf: Use both syntaxes for field-based CO-RE helpers
Andrii Nakryiko [Mon, 9 May 2022 00:41:43 +0000 (17:41 -0700)]
selftests/bpf: Use both syntaxes for field-based CO-RE helpers

Exercise both supported forms of the bpf_core_field_exists() and
bpf_core_field_size() helpers: the variable-based field reference and the
type/field name-based one.

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20220509004148.1801791-5-andrii@kernel.org
2 years ago libbpf: Improve usability of field-based CO-RE helpers
Andrii Nakryiko [Mon, 9 May 2022 00:41:42 +0000 (17:41 -0700)]
libbpf: Improve usability of field-based CO-RE helpers

Allow to specify field reference in two ways:

  - if the user has a variable of the necessary type, they can use a
    variable-based reference (my_var.my_field or my_var_ptr->my_field).
    This was the only supported syntax up till now.
  - now, bpf_core_field_exists() and bpf_core_field_size() also support
    specifying the field in a fashion similar to the offsetof() macro, by
    specifying the type of the containing struct/union and the field
    name separately: bpf_core_field_exists(struct my_type, my_field).
    This form is quite often more convenient in practice and it matches the
    type-based CO-RE helpers that support specifying a type by its name
    without requiring any variables (see the sketch below).
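
A hedged illustration of the two forms side by side; the task_struct field
used here is just an example:

  #include "vmlinux.h"
  #include <bpf/bpf_helpers.h>
  #include <bpf/bpf_core_read.h>

  SEC("raw_tp")
  int check_fields(void *ctx)
  {
          struct task_struct *t = (void *)bpf_get_current_task();
          __u64 sz1 = 0, sz2 = 0;

          if (bpf_core_field_exists(t->comm))                   /* variable-based */
                  sz1 = bpf_core_field_size(t->comm);

          if (bpf_core_field_exists(struct task_struct, comm))  /* type-based */
                  sz2 = bpf_core_field_size(struct task_struct, comm);

          bpf_printk("comm size: %llu %llu", sz1, sz2);
          return 0;
  }

  char LICENSE[] SEC("license") = "GPL";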

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20220509004148.1801791-4-andrii@kernel.org
2 years ago libbpf: Make __kptr and __kptr_ref unconditionally use btf_type_tag() attr
Andrii Nakryiko [Mon, 9 May 2022 00:41:41 +0000 (17:41 -0700)]
libbpf: Make __kptr and __kptr_ref unconditionally use btf_type_tag() attr

It will be annoying and surprising for users of __kptr and __kptr_ref if
libbpf silently ignores them just because the Clang used for compilation
didn't support btf_type_tag(). It's much better to get a clear compiler
error than to debug BPF verifier failures later on.

Fixes: ef89654f2bc7 ("libbpf: Add kptr type tag macros to bpf_helpers.h")
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20220509004148.1801791-3-andrii@kernel.org
2 years ago selftests/bpf: Prevent skeleton generation race
Andrii Nakryiko [Mon, 9 May 2022 00:41:40 +0000 (17:41 -0700)]
selftests/bpf: Prevent skeleton generation race

Prevent "classic" and light skeleton generation rules from stomping on
each other's toes due to the use of the same <obj>.linked{1,2,3}.o
naming pattern. There is no coordination and synchronization between
.skel.h and .lskel.h rules, so they can easily overwrite each other's
intermediate object files, leading to errors like:

  /bin/sh: line 1: 170928 Bus error               (core dumped)
  /data/users/andriin/linux/tools/testing/selftests/bpf/tools/sbin/bpftool gen skeleton
  /data/users/andriin/linux/tools/testing/selftests/bpf/test_ksyms_weak.linked3.o
  name test_ksyms_weak
  > /data/users/andriin/linux/tools/testing/selftests/bpf/test_ksyms_weak.skel.h
  make: *** [Makefile:507: /data/users/andriin/linux/tools/testing/selftests/bpf/test_ksyms_weak.skel.h] Error 135
  make: *** Deleting file '/data/users/andriin/linux/tools/testing/selftests/bpf/test_ksyms_weak.skel.h'

Fix by using different suffix for light skeleton rule.

Fixes: c48e51c8b07a ("bpf: selftests: Add selftests for module kfunc support")
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20220509004148.1801791-2-andrii@kernel.org
2 years ago selftests/bpf: Fix two memory leaks in prog_tests
Mykola Lysenko [Thu, 28 Apr 2022 22:57:44 +0000 (15:57 -0700)]
selftests/bpf: Fix two memory leaks in prog_tests

Fix log_fp memory leak in dispatch_thread_read_log.
Remove obsolete log_fp clean-up code in dispatch_thread.

Also, release memory of subtest_selector. This can be
reproduced with -n 2/1 parameters.

Signed-off-by: Mykola Lysenko <mykolal@fb.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20220428225744.1961643-1-mykolal@fb.com
2 years ago Merge branch 'libbpf: allow to opt-out from BPF map creation'
Alexei Starovoitov [Fri, 29 Apr 2022 03:03:30 +0000 (20:03 -0700)]
Merge branch 'libbpf: allow to opt-out from BPF map creation'

Andrii Nakryiko says:

====================

Add the bpf_map__set_autocreate() API, which is a BPF map counterpart of
bpf_program__set_autoload() and serves the similar goal of allowing more
flexible CO-RE applications to be built. See patch #3 for example scenarios in
which the need for such an API came up previously.

Patch #1 is a follow-up patch to previous patch set adding verifier log fixup
logic, making sure bpf_core_format_spec()'s return result is used for
something useful.

Patch #2 is a small refactoring to avoid unnecessarily verbose memory
management around the obj->maps array.

Patch #3 adds an API and the corresponding BPF verifier log fixup logic to
provide a human-comprehensible error message with useful details.

Patch #4 adds a simple selftest validating both the API itself and libbpf's
log fixup logic for it.
====================

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2 years ago selftests/bpf: Test bpf_map__set_autocreate() and related log fixup logic
Andrii Nakryiko [Thu, 28 Apr 2022 04:15:23 +0000 (21:15 -0700)]
selftests/bpf: Test bpf_map__set_autocreate() and related log fixup logic

Add a subtest that exercises the bpf_map__set_autocreate() API and
validates that libbpf properly fixes up the BPF verifier log with correct
map information.

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20220428041523.4089853-5-andrii@kernel.org
2 years ago libbpf: Allow to opt-out from creating BPF maps
Andrii Nakryiko [Thu, 28 Apr 2022 04:15:22 +0000 (21:15 -0700)]
libbpf: Allow to opt-out from creating BPF maps

Add a bpf_map__set_autocreate() API that allows the user to opt out of
libbpf automatically creating a BPF map during BPF object load.

This is a useful feature when building a CO-RE-enabled BPF application
that takes advantage of some newer BPF map type (e.g., socket-local
storage) if the kernel supports it, but otherwise uses some alternative
(e.g., an extra HASH map). In such a case, being able to disable the creation
of a map that the kernel doesn't support allows the BPF object file to be
created and loaded successfully with all its other maps and programs.
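
A minimal user-space sketch of the opt-out flow; the skeleton, map name and
feature check are hypothetical:

  #include <stdbool.h>
  #include <bpf/libbpf.h>

  static int open_and_load(bool have_task_storage)
  {
          struct my_skel *skel = my_skel__open();   /* hypothetical skeleton */
          int err;

          if (!skel)
                  return -1;

          if (!have_task_storage)
                  bpf_map__set_autocreate(skel->maps.task_storage_map, false);

          err = my_skel__load(skel);
          if (err)
                  my_skel__destroy(skel);
          return err;
  }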

It's still up to the user to make sure that no "live" code in any of their BPF
programs references such a map instance, which can be achieved by
guarding such code with a CO-RE relocation check or by using .rodata
global variables.

If the user fails to properly guard such code to turn it into "dead code",
libbpf will helpfully post-process the BPF verifier log and provide a
more meaningful error and the name of the map that needs to be guarded
properly. As such, instead of:

  ; value = bpf_map_lookup_elem(&missing_map, &zero);
  4: (85) call unknown#2001000000
  invalid func unknown#2001000000

... user will see:

  ; value = bpf_map_lookup_elem(&missing_map, &zero);
  4: <invalid BPF map reference>
  BPF map 'missing_map' is referenced but wasn't created

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20220428041523.4089853-4-andrii@kernel.org
2 years ago libbpf: Use libbpf_mem_ensure() when allocating new map
Andrii Nakryiko [Thu, 28 Apr 2022 04:15:21 +0000 (21:15 -0700)]
libbpf: Use libbpf_mem_ensure() when allocating new map

Reuse libbpf_mem_ensure() when adding a new map to the list of maps
inside bpf_object. It takes care of proper resizing and reallocating of
map array and zeroing out newly allocated memory.

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20220428041523.4089853-3-andrii@kernel.org
2 years ago libbpf: Append "..." in fixed up log if CO-RE spec is truncated
Andrii Nakryiko [Thu, 28 Apr 2022 04:15:20 +0000 (21:15 -0700)]
libbpf: Append "..." in fixed up log if CO-RE spec is truncated

Detect CO-RE spec truncation and append "..." to make user aware that
there was supposed to be more of the spec there.

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20220428041523.4089853-2-andrii@kernel.org
2 years ago selftests/bpf: Use target-less SEC() definitions in various tests
Andrii Nakryiko [Thu, 28 Apr 2022 18:53:49 +0000 (11:53 -0700)]
selftests/bpf: Use target-less SEC() definitions in various tests

Add new or modify existing SEC() definitions to be target-less and
validate that libbpf handles such program definitions correctly.

For kprobe/kretprobe we also add an explicit test that generic
bpf_program__attach() works in cases where the kprobe definition contains a
proper target. It wasn't previously tested, as the selftests code always
explicitly specified the target regardless.

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Song Liu <songliubraving@fb.com>
Link: https://lore.kernel.org/bpf/20220428185349.3799599-4-andrii@kernel.org
2 years ago libbpf: Support target-less SEC() definitions for BTF-backed programs
Andrii Nakryiko [Thu, 28 Apr 2022 18:53:48 +0000 (11:53 -0700)]
libbpf: Support target-less SEC() definitions for BTF-backed programs

Similar to the previous patch, support target-less definitions like
SEC("fentry"), SEC("freplace"), etc. For such BTF-backed program types
it is expected that the user will specify the BTF target programmatically at
runtime using bpf_program__set_attach_target() *before* the load phase. If
not, libbpf will report this as an error.
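
A short sketch of that flow; the skeleton, program and kernel function names
are hypothetical:

  #include <bpf/libbpf.h>

  /* BPF side would declare just: SEC("fentry") int BPF_PROG(handler) ... */
  static int set_target_and_load(struct my_skel *skel)
  {
          int err;

          err = bpf_program__set_attach_target(skel->progs.handler, 0,
                                               "tcp_v4_connect");
          if (err)
                  return err;

          return my_skel__load(skel);
  }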

Also use the SEC_ATTACH_BTF flag instead of explicitly listing a set of
types that are expected to require attach_btf_id. This was an accidental
omission during the custom SEC() support refactoring.

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Song Liu <songliubraving@fb.com>
Link: https://lore.kernel.org/bpf/20220428185349.3799599-3-andrii@kernel.org
2 years ago libbpf: Allow "incomplete" basic tracing SEC() definitions
Andrii Nakryiko [Thu, 28 Apr 2022 18:53:47 +0000 (11:53 -0700)]
libbpf: Allow "incomplete" basic tracing SEC() definitions

In a lot of cases the target of a kprobe/kretprobe, tracepoint, raw
tracepoint, etc. BPF program might not be known at compilation time
and will be discovered at runtime. This has always been a supported case in
libbpf, with APIs like bpf_program__attach_{kprobe,tracepoint,etc}()
accepting the full target definition, regardless of what was defined in the
SEC() definition in the BPF source code.

Unfortunately, up till now libbpf still required users to specify at
least something for the fake target, e.g., SEC("kprobe/whatever"), which
is cumbersome and somewhat misleading.

This patch allows target-less SEC() definitions for basic tracing BPF
program types:

  - kprobe/kretprobe;
  - multi-kprobe/multi-kretprobe;
  - tracepoints;
  - raw tracepoints.

Such target-less SEC() definitions are meant to declaratively specify the
proper BPF program type only. Their attachment will have to be handled
programmatically using the correct APIs. As such, the skeleton's auto-attachment
of such BPF programs is skipped and generic bpf_program__attach() will
fail, if attempted, due to the lack of enough target information.
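
A brief sketch of the programmatic attachment for a target-less kprobe; the
kernel function name is just an example:

  #include <bpf/libbpf.h>

  /* BPF side: SEC("kprobe") int BPF_KPROBE(handler) { return 0; } */
  static struct bpf_link *attach_runtime_target(struct bpf_program *prog)
  {
          return bpf_program__attach_kprobe(prog, false /* retprobe */,
                                            "do_unlinkat");
  }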

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Song Liu <songliubraving@fb.com>
Link: https://lore.kernel.org/bpf/20220428185349.3799599-2-andrii@kernel.org
2 years ago bpf, sockmap: Call skb_linearize only when required in sk_psock_skb_ingress_enqueue
Liu Jian [Wed, 27 Apr 2022 11:51:50 +0000 (19:51 +0800)]
bpf, sockmap: Call skb_linearize only when required in sk_psock_skb_ingress_enqueue

The skb_to_sgvec fails only when the number of frag_list and frags
exceeds MAX_MSG_FRAGS. Therefore, we can call skb_linearize only
when the conversion fails.

Signed-off-by: Liu Jian <liujian56@huawei.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Link: https://lore.kernel.org/bpf/20220427115150.210213-1-liujian56@huawei.com
2 years ago bpf, docs: Fix typo "respetively" to "respectively"
Tiezhu Yang [Thu, 28 Apr 2022 09:55:54 +0000 (17:55 +0800)]
bpf, docs: Fix typo "respetively" to "respectively"

"respetively" should be "respectively".

Signed-off-by: Tiezhu Yang <yangtiezhu@loongson.cn>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/1651139754-4838-4-git-send-email-yangtiezhu@loongson.cn
2 years ago bpf, docs: BPF_FROM_BE exists as alias for BPF_TO_BE
Tiezhu Yang [Thu, 28 Apr 2022 09:55:53 +0000 (17:55 +0800)]
bpf, docs: BPF_FROM_BE exists as alias for BPF_TO_BE

According to include/uapi/linux/bpf.h:

  #define BPF_FROM_LE BPF_TO_LE
  #define BPF_FROM_BE BPF_TO_BE

BPF_FROM_BE exists as alias for BPF_TO_BE instead of BPF_TO_LE.

Signed-off-by: Tiezhu Yang <yangtiezhu@loongson.cn>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/1651139754-4838-3-git-send-email-yangtiezhu@loongson.cn
2 years ago bpf, docs: Remove duplicated word "instructions"
Tiezhu Yang [Thu, 28 Apr 2022 09:55:52 +0000 (17:55 +0800)]
bpf, docs: Remove duplicated word "instructions"

The word "instructions" is duplicated, remove it.

Signed-off-by: Tiezhu Yang <yangtiezhu@loongson.cn>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/1651139754-4838-2-git-send-email-yangtiezhu@loongson.cn
2 years ago samples/bpf: Detach xdp prog when program exits unexpectedly in xdp_rxq_info_user
Zhengchao Shao [Wed, 27 Apr 2022 06:23:38 +0000 (14:23 +0800)]
samples/bpf: Detach xdp prog when program exits unexpectedly in xdp_rxq_info_user

When the xdp_rxq_info_user program exits unexpectedly, it doesn't detach the
XDP prog from the device, and another XDP prog then can't be attached to the
device. So call init_exit() to detach the XDP prog when the program exits
unexpectedly.

Signed-off-by: Zhengchao Shao <shaozhengchao@huawei.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20220427062338.80173-1-shaozhengchao@huawei.com
2 years ago bpf/selftests: Add granular subtest output for prog_test
Mykola Lysenko [Wed, 27 Apr 2022 04:13:53 +0000 (21:13 -0700)]
bpf/selftests: Add granular subtest output for prog_test

Implement per subtest log collection for both parallel
and sequential test execution. This allows granular
per-subtest error output in the 'All error logs' section.
Add subtest log transfer into the protocol during the
parallel test execution.

Move all test log printing logic into dump_test_log
function. One exception is the output of test names when
verbose printing is enabled. Move test name/result
printing into separate functions to avoid repetition.

Print all successful subtest results in the log. Print
only failed test logs when a test does not have subtests,
or only the failed subtests' logs when a test has subtests.

Disable 'All error logs' output when verbose mode is
enabled. This functionality was already broken and is
causing confusion.

Signed-off-by: Mykola Lysenko <mykolal@fb.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20220427041353.246007-1-mykolal@fb.com
2 years ago Merge https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next
Jakub Kicinski [Thu, 28 Apr 2022 00:09:31 +0000 (17:09 -0700)]
Merge https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next

Daniel Borkmann says:

====================
pull-request: bpf-next 2022-04-27

We've added 85 non-merge commits during the last 18 day(s) which contain
a total of 163 files changed, 4499 insertions(+), 1521 deletions(-).

The main changes are:

1) Teach libbpf to enhance BPF verifier log with human-readable and relevant
   information about failed CO-RE relocations, from Andrii Nakryiko.

2) Add typed pointer support in BPF maps and enable it for unreferenced pointers
   (via probe read) and referenced ones that can be passed to in-kernel helpers,
   from Kumar Kartikeya Dwivedi.

3) Improve xsk to break NAPI loop when rx queue gets full to allow for forward
   progress to consume descriptors, from Maciej Fijalkowski & Björn Töpel.

4) Fix a small RCU read-side race in BPF_PROG_RUN routines which dereferenced
   the effective prog array before the rcu_read_lock, from Stanislav Fomichev.

5) Implement BPF atomic operations for RV64 JIT, and add libbpf parsing logic
   for USDT arguments under riscv{32,64}, from Pu Lehui.

6) Implement libbpf parsing of USDT arguments under aarch64, from Alan Maguire.

7) Enable bpftool build for musl and remove nftw with FTW_ACTIONRETVAL usage
   so it can be shipped under Alpine which is musl-based, from Dominique Martinet.

8) Clean up {sk,task,inode} local storage trace RCU handling as they do not
   need to use call_rcu_tasks_trace() barrier, from KP Singh.

9) Improve libbpf API documentation and fix error return handling of various
   API functions, from Grant Seltzer.

10) Enlarge offset check for bpf_skb_{load,store}_bytes() helpers given data
    length of frags + frag_list may surpass old offset limit, from Liu Jian.

11) Various improvements to prog_tests in area of logging, test execution
    and by-name subtest selection, from Mykola Lysenko.

12) Simplify map_btf_id generation for all map types by moving this process
    to build time with help of resolve_btfids infra, from Menglong Dong.

13) Fix a libbpf bug in probing when falling back to legacy bpf_probe_read*()
    helpers; the probing caused always to use old helpers, from Runqing Yang.

14) Add support for ARCompact and ARCv2 platforms for libbpf's PT_REGS
    tracing macros, from Vladimir Isaev.

15) Cleanup BPF selftests to remove old & unneeded rlimit code given kernel
    switched to memcg-based memory accounting a while ago, from Yafang Shao.

16) Refactor of BPF sysctl handlers to move them to BPF core, from Yan Zhu.

17) Fix BPF selftests in two occasions to work around regressions caused by latest
    LLVM to unblock CI until their fixes are worked out, from Yonghong Song.

18) Misc cleanups all over the place, from various others.

* https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next: (85 commits)
  selftests/bpf: Add libbpf's log fixup logic selftests
  libbpf: Fix up verifier log for unguarded failed CO-RE relos
  libbpf: Simplify bpf_core_parse_spec() signature
  libbpf: Refactor CO-RE relo human description formatting routine
  libbpf: Record subprog-resolved CO-RE relocations unconditionally
  selftests/bpf: Add CO-RE relos and SEC("?...") to linked_funcs selftests
  libbpf: Avoid joining .BTF.ext data with BPF programs by section name
  libbpf: Fix logic for finding matching program for CO-RE relocation
  libbpf: Drop unhelpful "program too large" guess
  libbpf: Fix anonymous type check in CO-RE logic
  bpf: Compute map_btf_id during build time
  selftests/bpf: Add test for strict BTF type check
  selftests/bpf: Add verifier tests for kptr
  selftests/bpf: Add C tests for kptr
  libbpf: Add kptr type tag macros to bpf_helpers.h
  bpf: Make BTF type match stricter for release arguments
  bpf: Teach verifier about kptr_get kfunc helpers
  bpf: Wire up freeing of referenced kptr
  bpf: Populate pairs of btf_id and destructor kfunc in btf
  bpf: Adapt copy_map_value for multiple offset case
  ...
====================

Link: https://lore.kernel.org/r/20220427224758.20976-1-daniel@iogearbox.net
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2 years ago net: dsa: ksz9477: move get_stats64 to ksz_common.c
Arun Ramadoss [Tue, 26 Apr 2022 09:10:48 +0000 (14:40 +0530)]
net: dsa: ksz9477: move get_stats64 to ksz_common.c

The MIB counters for the ksz9477 are the same for the ksz9477 switch and the
LAN937x switch. Hence, move get_stats64 to the ksz_common.c file in order to
have a generic function. The DSA hook get_stats64 can now call ksz_get_stats64.

Signed-off-by: Arun Ramadoss <arun.ramadoss@microchip.com>
Reviewed-by: Oleksij Rempel <o.rempel@pengutronix.de>
Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
Reviewed-by: Vladimir Oltean <olteanv@gmail.com>
Link: https://lore.kernel.org/r/20220426091048.9311-1-arun.ramadoss@microchip.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2 years ago Merge branch 'remove-virt_to_bus-drivers'
David S. Miller [Wed, 27 Apr 2022 11:22:56 +0000 (12:22 +0100)]
Merge branch 'remove-virt_to_bus-drivers'

Jakub Kicinski says:

====================
net: remove non-Ethernet drivers using virt_to_bus()

Networking is currently the main offender in using virt_to_bus().
Frankly all the drivers which use it are super old and unlikely
to be used today. They are just an ongoing maintenance burden.

In other words this series is using virt_to_bus() as an excuse
to shed some old stuff. Having done the tree-wide dev_addr_set()
conversion recently I have limited sympathy for carrying dead
code.

Obviously please scream if any of these drivers _is_ in fact
still being used. Otherwise let's take the chance, we can always
apologize and revert if users show up later.

Also I should say thanks to everyone who contributed to this code!
The work continues to be appreciated although realistically in more
of a "history book" fashion...
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
2 years ago net: hamradio: remove support for DMA SCC devices
Jakub Kicinski [Tue, 26 Apr 2022 17:54:36 +0000 (10:54 -0700)]
net: hamradio: remove support for DMA SCC devices

Another incarnation of Z8530, looks like? Again, no real changes
in the git history, and it needs VIRT_TO_BUS. Unlikely to have
users, let's spend less time refactoring dead code...

Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years ago net: wan: remove support for Z85230-based devices
Jakub Kicinski [Tue, 26 Apr 2022 17:54:35 +0000 (10:54 -0700)]
net: wan: remove support for Z85230-based devices

Looks like all the changes to this driver had been automated
churn since git era begun. The driver is using virt_to_bus(),
it's just a maintenance burden unlikely to have any users.

Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years ago net: wan: remove support for COSA and SRP synchronous serial boards
Jakub Kicinski [Tue, 26 Apr 2022 17:54:34 +0000 (10:54 -0700)]
net: wan: remove support for COSA and SRP synchronous serial boards

Looks like all the changes to this driver had been automated
churn since git era begun. The driver is using virt_to_bus()
so it should be updated to a proper DMA API or removed. Given
the latest "news" entry on the website is from 1999 I'm opting
for the latter.

I'm marking the allocated char device major number as [REMOVED];
I reckon we can't reuse it in case some SW out there assumes it's
COSA?

Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years ago net: atm: remove support for ZeitNet ZN122x ATM devices
Jakub Kicinski [Tue, 26 Apr 2022 17:54:33 +0000 (10:54 -0700)]
net: atm: remove support for ZeitNet ZN122x ATM devices

This driver received nothing but automated fixes in the last 15 years.
Since it's using virt_to_bus it's unlikely to be used on any modern
platform.

Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years ago net: atm: remove support for Madge Horizon ATM devices
Jakub Kicinski [Tue, 26 Apr 2022 17:54:32 +0000 (10:54 -0700)]
net: atm: remove support for Madge Horizon ATM devices

This driver received nothing but automated fixes since git era begun.
Since it's using virt_to_bus it's unlikely to be used on any modern
platform.

Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years ago net: atm: remove support for Fujitsu FireStream ATM devices
Jakub Kicinski [Tue, 26 Apr 2022 17:54:31 +0000 (10:54 -0700)]
net: atm: remove support for Fujitsu FireStream ATM devices

This driver received nothing but automated fixes (mostly spelling
and compiler warnings) since git era begun. Since it's using
virt_to_bus it's unlikely to be used on any modern platform.

Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years ago Merge branch 'lan966x-ptp-programmable-pins'
David S. Miller [Wed, 27 Apr 2022 11:03:18 +0000 (12:03 +0100)]
Merge branch 'lan966x-ptp-programmable-pins'

Horatiu Vultur says:

====================
net: lan966x: Add support for PTP programmable pins

Lan966x has 8 PTP programmable pins. The last pin is hardcoded to be used
by PHC0 and all the rest are shareable between the PHCs. The PTP pins can
implement both extts and perout functions.

v1->v2:
- use ptp_find_pin_unlocked instead of ptp_find_pin inside the irq handler.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
2 years ago net: lan966x: Add support for PTP_PF_EXTTS
Horatiu Vultur [Wed, 27 Apr 2022 06:51:27 +0000 (08:51 +0200)]
net: lan966x: Add support for PTP_PF_EXTTS

Extend the PTP programmable pins to also implement the PTP_PF_EXTTS
function. The PTP pin can be configured to capture only on the rising
edge of the PPS signal. Once an event is seen, an interrupt is
generated and the local time counter is saved.
The interrupt is shared between all the pins.

Signed-off-by: Horatiu Vultur <horatiu.vultur@microchip.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years ago net: lan966x: Add support for PTP_PF_PEROUT
Horatiu Vultur [Wed, 27 Apr 2022 06:51:26 +0000 (08:51 +0200)]
net: lan966x: Add support for PTP_PF_PEROUT

Lan966x has 8 PTP programmable pins, where the last pin is hardcoded to
be used by PHC0, which does the frame timestamping. All the rest of the
PTP pins can be shared between the PHCs and can have different functions
like perout or extts. For now, add support for PTP_PF_PEROUT.
The HW is not able to support an absolute start time but can use the nsec
field for phase adjustment when generating PPS.
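
A hedged user-space sketch of requesting such a 1 PPS output through the
standard PTP chardev UAPI; the device path, pin index and channel are
illustrative:

  #include <fcntl.h>
  #include <unistd.h>
  #include <sys/ioctl.h>
  #include <linux/ptp_clock.h>

  static int request_1pps(const char *dev, unsigned int pin)
  {
          struct ptp_pin_desc pd = { .index = pin, .func = PTP_PF_PEROUT };
          struct ptp_perout_request req = { .index = 0, .period = { .sec = 1 } };
          int ret, fd = open(dev, O_RDWR);

          if (fd < 0)
                  return -1;

          ret = ioctl(fd, PTP_PIN_SETFUNC, &pd);             /* pin -> PEROUT */
          if (!ret)
                  ret = ioctl(fd, PTP_PEROUT_REQUEST, &req); /* 1 PPS */

          close(fd);
          return ret;
  }

The extts side is requested in a similar way with PTP_EXTTS_REQUEST.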

Signed-off-by: Horatiu Vultur <horatiu.vultur@microchip.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years ago net: lan966x: Add registers used to configure the PTP pin
Horatiu Vultur [Wed, 27 Apr 2022 06:51:25 +0000 (08:51 +0200)]
net: lan966x: Add registers used to configure the PTP pin

Add registers that are used to configure the PTP pins. These registers
are used to enable the interrupts per PTP pin and to set the waveform
generated by the pin.

Signed-off-by: Horatiu Vultur <horatiu.vultur@microchip.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years ago net: lan966x: Change the PTP pin used to read/write the PHC.
Horatiu Vultur [Wed, 27 Apr 2022 06:51:24 +0000 (08:51 +0200)]
net: lan966x: Change the PTP pin used to read/write the PHC.

To read/write a value to a PHC, it is required to use a PTP pin.
Currently pin 5 is used; change to pin 7, as it is the last pin.
All the other pins will have different functions.

Signed-off-by: Horatiu Vultur <horatiu.vultur@microchip.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agodt-bindings: net: lan966x: Extend with the ptp external interrupt.
Horatiu Vultur [Wed, 27 Apr 2022 06:51:23 +0000 (08:51 +0200)]
dt-bindings: net: lan966x: Extend with the ptp external interrupt.

Extend dt-bindings for lan966x with ptp external interrupt. This is
generated when an external 1pps signal is received on the ptp pin.

Signed-off-by: Horatiu Vultur <horatiu.vultur@microchip.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agoMerge branch 'mptcp-MP_FAIL-timeout'
David S. Miller [Wed, 27 Apr 2022 09:45:55 +0000 (10:45 +0100)]
Merge branch 'mptcp-MP_FAIL-timeout'

Mat Martineau says:

====================
mptcp: Timeout for MP_FAIL response

When one peer sends an infinite mapping to coordinate fallback from
MPTCP to regular TCP, the other peer is expected to send a packet with
the MPTCP MP_FAIL option to acknowledge the infinite mapping. Rather
than leave the connection in some half-fallback state, this series adds
a timeout after which the infinite mapping sender will reset the
connection.

Patch 1 adds a fallback self test.

Patches 2-5 make use of the MPTCP socket's retransmit timer to reset the
MPTCP connection if no MP_FAIL was received.

Patches 6 and 7 extend the self test to check MP_FAIL-related MIBs.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agoselftests: mptcp: print extra msg in chk_csum_nr
Geliang Tang [Tue, 26 Apr 2022 21:57:17 +0000 (14:57 -0700)]
selftests: mptcp: print extra msg in chk_csum_nr

When multiple checksum errors occur in chk_csum_nr(), print the
number of errors as an extra message.

Signed-off-by: Geliang Tang <geliang.tang@suse.com>
Signed-off-by: Mat Martineau <mathew.j.martineau@linux.intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agoselftests: mptcp: check MP_FAIL response mibs
Geliang Tang [Tue, 26 Apr 2022 21:57:16 +0000 (14:57 -0700)]
selftests: mptcp: check MP_FAIL response mibs

This patch extends chk_fail_nr to check the MP_FAIL response mibs.

Add a new argument, invert, to chk_fail_nr to allow it to check the
MP_FAIL TX and RX MIBs from the opposite direction.

When the infinite map is received before the MP_FAIL response, the
response will be lost. A '-' can be added to fail_tx or fail_rx to
indicate that the MP_FAIL response TX or RX can be lost when doing the
checks.

Signed-off-by: Geliang Tang <geliang.tang@suse.com>
Signed-off-by: Mat Martineau <mathew.j.martineau@linux.intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agomptcp: reset subflow when MP_FAIL doesn't respond
Geliang Tang [Tue, 26 Apr 2022 21:57:15 +0000 (14:57 -0700)]
mptcp: reset subflow when MP_FAIL doesn't respond

This patch adds a new msk->flags bit MPTCP_FAIL_NO_RESPONSE, then reuses
sk_timer to trigger a check if we have not received a response from the
peer after sending MP_FAIL. If the peer doesn't respond properly, reset
the subflow.

Signed-off-by: Geliang Tang <geliang.tang@suse.com>
Signed-off-by: Mat Martineau <mathew.j.martineau@linux.intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agomptcp: add MP_FAIL response support
Geliang Tang [Tue, 26 Apr 2022 21:57:14 +0000 (14:57 -0700)]
mptcp: add MP_FAIL response support

This patch adds a new struct member, mp_fail_response_expect, in struct
mptcp_subflow_context to support the MP_FAIL response. In the special
case of a single subflow with a checksum error and contiguous data, an
MP_FAIL is sent in response to another MP_FAIL.

Signed-off-by: Geliang Tang <geliang.tang@suse.com>
Signed-off-by: Mat Martineau <mathew.j.martineau@linux.intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agomptcp: add data lock for sk timers
Geliang Tang [Tue, 26 Apr 2022 21:57:13 +0000 (14:57 -0700)]
mptcp: add data lock for sk timers

mptcp_data_lock() needs to be held when manipulating the msk
retransmit_timer or the sk sk_timer. This patch adds the data
lock for both timers.
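
A hedged sketch of the resulting pattern (helper names are the existing
ones from net/mptcp/protocol.h; this is not the literal diff):

  /* stop sk_timer with the msk data lock held; the same pattern applies
   * when rearming the retransmit timer
   */
  static void example_stop_sk_timer(struct sock *sk)
  {
          mptcp_data_lock(sk);
          sk_stop_timer(sk, &sk->sk_timer);
          mptcp_data_unlock(sk);
  }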

Signed-off-by: Geliang Tang <geliang.tang@suse.com>
Signed-off-by: Mat Martineau <mathew.j.martineau@linux.intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agomptcp: use mptcp_stop_timer
Geliang Tang [Tue, 26 Apr 2022 21:57:12 +0000 (14:57 -0700)]
mptcp: use mptcp_stop_timer

Use the helper mptcp_stop_timer() instead of using sk_stop_timer() to
stop icsk_retransmit_timer directly.

Signed-off-by: Geliang Tang <geliang.tang@suse.com>
Signed-off-by: Mat Martineau <mathew.j.martineau@linux.intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agoselftests: mptcp: add infinite map testcase
Geliang Tang [Tue, 26 Apr 2022 21:57:11 +0000 (14:57 -0700)]
selftests: mptcp: add infinite map testcase

Add the single subflow test case for MP_FAIL, to test the infinite
mapping case. Use the test_linkfail value to make 128KB test files.

Add a new function, reset_with_fail(), which uses 'iptables' and 'tc
action pedit' rules to produce the bit flips that trigger the checksum
failures. Set validate_checksum to enable checksums for the MP_FAIL
tests without passing the '-C' argument. Set the check_invert flag to
enable the inverted-bytes check for the output data in check_transfer().
Instead of the file mismatch error, this test prints out the inverted
bytes.

Add a new function pedit_action_pkts() to get the number of packets
edited by the tc pedit actions. Print this number in the output.

Also add the needed kernel config options to the selftests config file.

Suggested-by: Davide Caratti <dcaratti@redhat.com>
Co-developed-by: Matthieu Baerts <matthieu.baerts@tessares.net>
Signed-off-by: Matthieu Baerts <matthieu.baerts@tessares.net>
Signed-off-by: Geliang Tang <geliang.tang@suse.com>
Signed-off-by: Mat Martineau <mathew.j.martineau@linux.intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agonet: stmmac: dwmac-imx: comment spelling fix
Marcel Ziswiler [Mon, 25 Apr 2022 15:48:56 +0000 (17:48 +0200)]
net: stmmac: dwmac-imx: comment spelling fix

Fix spelling in comment.

Fixes: 94abdad6974a ("net: ethernet: dwmac: add ethernet glue logic for NXP imx8 chip")
Signed-off-by: Marcel Ziswiler <marcel.ziswiler@toradex.com>
Link: https://lore.kernel.org/r/20220425154856.169499-1-marcel@ziswiler.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2 years agonet: remove comments that mention obsolete __SLOW_DOWN_IO
Bjorn Helgaas [Mon, 25 Apr 2022 21:26:44 +0000 (16:26 -0500)]
net: remove comments that mention obsolete __SLOW_DOWN_IO

The only remaining definitions of __SLOW_DOWN_IO (for alpha and ia64) do
nothing, and the only mentions in networking are in comments.  Remove these
mentions.

Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2 years agonet: wan: atp: remove unused eeprom_delay()
Bjorn Helgaas [Mon, 25 Apr 2022 21:26:43 +0000 (16:26 -0500)]
net: wan: atp: remove unused eeprom_delay()

atp.h is included only by atp.c, which does not use eeprom_delay().  Remove
the unused definition.

Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2 years agonet: tls: fix async vs NIC crypto offload
Jakub Kicinski [Mon, 25 Apr 2022 23:33:09 +0000 (16:33 -0700)]
net: tls: fix async vs NIC crypto offload

When the NIC takes care of crypto (or the record has already
been decrypted) we forget to update darg->async. ->async
is supposed to indicate whether the record is async capable on
input and whether the record has been queued for async crypto
on output.

Reported-by: Gal Pressman <gal@nvidia.com>
Fixes: 3547a1f9d988 ("tls: rx: use async as an in-out argument")
Tested-by: Gal Pressman <gal@nvidia.com>
Link: https://lore.kernel.org/r/20220425233309.344858-1-kuba@kernel.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2 years agonet: dsa: mt753x: fix pcs conversion regression
Russell King (Oracle) [Mon, 25 Apr 2022 21:28:02 +0000 (22:28 +0100)]
net: dsa: mt753x: fix pcs conversion regression

Daniel Golle reports that the conversion of mt753x to phylink PCS caused
an oops as below.

The problem is with the placement of the PCS initialisation, which
occurs after mt7531_setup() has been called. However, buried in this
function is a call to set up the CPU port, which requires the PCS
structure to already be set up.

Fix this by changing the initialisation order.

Unable to handle kernel NULL pointer dereference at virtual address 0000000000000020
Mem abort info:
  ESR = 0x96000005
  EC = 0x25: DABT (current EL), IL = 32 bits
  SET = 0, FnV = 0
  EA = 0, S1PTW = 0
  FSC = 0x05: level 1 translation fault
Data abort info:
  ISV = 0, ISS = 0x00000005
  CM = 0, WnR = 0
user pgtable: 4k pages, 39-bit VAs, pgdp=0000000046057000
[0000000000000020] pgd=0000000000000000, p4d=0000000000000000, pud=0000000000000000
Internal error: Oops: 96000005 [#1] SMP
Modules linked in:
CPU: 0 PID: 32 Comm: kworker/u4:1 Tainted: G S 5.18.0-rc3-next-20220422+ #0
Hardware name: Bananapi BPI-R64 (DT)
Workqueue: events_unbound deferred_probe_work_func
pstate: 60000005 (nZCv daif -PAN -UAO -TCO -DIT -SSBS BTYPE=--)
pc : mt7531_cpu_port_config+0xcc/0x1b0
lr : mt7531_cpu_port_config+0xc0/0x1b0
sp : ffffffc008d5b980
x29: ffffffc008d5b990 x28: ffffff80060562c8 x27: 00000000f805633b
x26: ffffff80001a8880 x25: 00000000000009c4 x24: 0000000000000016
x23: ffffff8005eb6470 x22: 0000000000003600 x21: ffffff8006948080
x20: 0000000000000000 x19: 0000000000000006 x18: 0000000000000000
x17: 0000000000000001 x16: 0000000000000001 x15: 02963607fcee069e
x14: 0000000000000000 x13: 0000000000000030 x12: 0101010101010101
x11: ffffffc037302000 x10: 0000000000000870 x9 : ffffffc008d5b800
x8 : ffffff800028f950 x7 : 0000000000000001 x6 : 00000000662b3000
x5 : 00000000000002f0 x4 : 0000000000000000 x3 : ffffff800028f080
x2 : 0000000000000000 x1 : ffffff800028f080 x0 : 0000000000000000
Call trace:
 mt7531_cpu_port_config+0xcc/0x1b0
 mt753x_cpu_port_enable+0x24/0x1f0
 mt7531_setup+0x49c/0x5c0
 mt753x_setup+0x20/0x31c
 dsa_register_switch+0x8bc/0x1020
 mt7530_probe+0x118/0x200
 mdio_probe+0x30/0x64
 really_probe.part.0+0x98/0x280
 __driver_probe_device+0x94/0x140
 driver_probe_device+0x40/0x114
 __device_attach_driver+0xb0/0x10c
 bus_for_each_drv+0x64/0xa0
 __device_attach+0xa8/0x16c
 device_initial_probe+0x10/0x20
 bus_probe_device+0x94/0x9c
 deferred_probe_work_func+0x80/0xb4
 process_one_work+0x200/0x3a0
 worker_thread+0x260/0x4c0
 kthread+0xd4/0xe0
 ret_from_fork+0x10/0x20
Code: 9409e911 937b7e60 8b0002a0 f9405800 (f9401005)
---[ end trace 0000000000000000 ]---

Reported-by: Daniel Golle <daniel@makrotopia.org>
Tested-by: Daniel Golle <daniel@makrotopia.org>
Fixes: cbd1f243bc41 ("net: dsa: mt7530: partially convert to phylink_pcs")
Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
Link: https://lore.kernel.org/r/E1nj6FW-007WZB-5Y@rmk-PC.armlinux.org.uk
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2 years agonet: generalize skb freeing deferral to per-cpu lists
Eric Dumazet [Fri, 22 Apr 2022 20:12:37 +0000 (13:12 -0700)]
net: generalize skb freeing deferral to per-cpu lists

Logic added in commit f35f821935d8 ("tcp: defer skb freeing after socket
lock is released") helped bulk TCP flows move the cost of skb frees
outside of the critical section where the socket lock was held.

But for RPC traffic, or hosts with RFS enabled, the solution is far from
being ideal.

For RPC traffic, recvmsg() has to return to user space right after
the skb payload has been consumed, meaning that the BH handler has no
chance to pick the skb before the recvmsg() thread. This issue is more
visible with BIG TCP, as more RPCs fit in one skb.

For RFS, even if BH handler picks the skbs, they are still picked
from the cpu on which user thread is running.

Ideally, it is better to free the skbs (and associated page frags)
on the cpu that originally allocated them.

This patch removes the per socket anchor (sk->defer_list) and
instead uses a per-cpu list, which will hold more skbs per round.

This new per-cpu list is drained at the end of net_rx_action(),
after incoming packets have been processed, to lower latencies.

In normal conditions, skbs are added to the per-cpu list with
no further action. In the (unlikely) cases where the cpu does not
run the net_rx_action() handler fast enough, we use an IPI to raise
NET_RX_SOFTIRQ on the remote cpu.

Also, we do not bother draining the per-cpu list from dev_cpu_dead().
This is because skbs in this list have no requirement on how fast
they should be freed.

Note that we can add in the future a small per-cpu cache
if we see any contention on sd->defer_lock.
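
A simplified, hedged sketch of the deferral path; the softnet_data fields
follow the description above, but this is not the literal kernel code:

  #include <linux/netdevice.h>
  #include <linux/skbuff.h>
  #include <linux/smp.h>

  static void sketch_attempt_defer_free(struct sk_buff *skb)
  {
          int cpu = skb->alloc_cpu;       /* cpu that allocated the skb */
          struct softnet_data *sd;
          unsigned long flags;
          bool kick;

          /* nothing to gain from deferring to ourselves or to a dead cpu */
          if (cpu == smp_processor_id() || !cpu_online(cpu)) {
                  __kfree_skb(skb);
                  return;
          }

          sd = &per_cpu(softnet_data, cpu);
          spin_lock_irqsave(&sd->defer_lock, flags);
          kick = sd->defer_count == 0;
          sd->defer_count++;
          skb->next = sd->defer_list;     /* single-linked list, per v2 */
          WRITE_ONCE(sd->defer_list, skb);
          spin_unlock_irqrestore(&sd->defer_lock, flags);

          /* the remote cpu drains the list at the end of net_rx_action();
           * if it looked idle, raise NET_RX_SOFTIRQ there via an IPI
           */
          if (kick)
                  smp_call_function_single_async(cpu, &sd->defer_csd);
  }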

Tested on a pair of hosts with 100Gbit NIC, RFS enabled,
and /proc/sys/net/ipv4/tcp_rmem[2] tuned to 16MB to work around
page recycling strategy used by NIC driver (its page pool capacity
being too small compared to number of skbs/pages held in sockets
receive queues)

Note that this tuning was only done to demonstrate worse
conditions for skb freeing for this particular test.
These conditions can happen in more general production workload.

10 runs of one TCP_STREAM flow

Before:
Average throughput: 49685 Mbit.

Kernel profiles on cpu running user thread recvmsg() show high cost for
skb freeing related functions (*)

    57.81%  [kernel]       [k] copy_user_enhanced_fast_string
(*) 12.87%  [kernel]       [k] skb_release_data
(*)  4.25%  [kernel]       [k] __free_one_page
(*)  3.57%  [kernel]       [k] __list_del_entry_valid
     1.85%  [kernel]       [k] __netif_receive_skb_core
     1.60%  [kernel]       [k] __skb_datagram_iter
(*)  1.59%  [kernel]       [k] free_unref_page_commit
(*)  1.16%  [kernel]       [k] __slab_free
     1.16%  [kernel]       [k] _copy_to_iter
(*)  1.01%  [kernel]       [k] kfree
(*)  0.88%  [kernel]       [k] free_unref_page
     0.57%  [kernel]       [k] ip6_rcv_core
     0.55%  [kernel]       [k] ip6t_do_table
     0.54%  [kernel]       [k] flush_smp_call_function_queue
(*)  0.54%  [kernel]       [k] free_pcppages_bulk
     0.51%  [kernel]       [k] llist_reverse_order
     0.38%  [kernel]       [k] process_backlog
(*)  0.38%  [kernel]       [k] free_pcp_prepare
     0.37%  [kernel]       [k] tcp_recvmsg_locked
(*)  0.37%  [kernel]       [k] __list_add_valid
     0.34%  [kernel]       [k] sock_rfree
     0.34%  [kernel]       [k] _raw_spin_lock_irq
(*)  0.33%  [kernel]       [k] __page_cache_release
     0.33%  [kernel]       [k] tcp_v6_rcv
(*)  0.33%  [kernel]       [k] __put_page
(*)  0.29%  [kernel]       [k] __mod_zone_page_state
     0.27%  [kernel]       [k] _raw_spin_lock

After patch:
Average throughput: 73076 Mbit.

Kernel profiles on the cpu running the user thread recvmsg() look better:

    81.35%  [kernel]       [k] copy_user_enhanced_fast_string
     1.95%  [kernel]       [k] _copy_to_iter
     1.95%  [kernel]       [k] __skb_datagram_iter
     1.27%  [kernel]       [k] __netif_receive_skb_core
     1.03%  [kernel]       [k] ip6t_do_table
     0.60%  [kernel]       [k] sock_rfree
     0.50%  [kernel]       [k] tcp_v6_rcv
     0.47%  [kernel]       [k] ip6_rcv_core
     0.45%  [kernel]       [k] read_tsc
     0.44%  [kernel]       [k] _raw_spin_lock_irqsave
     0.37%  [kernel]       [k] _raw_spin_lock
     0.37%  [kernel]       [k] native_irq_return_iret
     0.33%  [kernel]       [k] __inet6_lookup_established
     0.31%  [kernel]       [k] ip6_protocol_deliver_rcu
     0.29%  [kernel]       [k] tcp_rcv_established
     0.29%  [kernel]       [k] llist_reverse_order

v2: kdoc issue (kernel bots)
    do not defer if (alloc_cpu == smp_processor_id()) (Paolo)
    replace the sk_buff_head with a single-linked list (Jakub)
    add a READ_ONCE()/WRITE_ONCE() for the lockless read of sd->defer_list

Signed-off-by: Eric Dumazet <edumazet@google.com>
Acked-by: Paolo Abeni <pabeni@redhat.com>
Link: https://lore.kernel.org/r/20220422201237.416238-1-eric.dumazet@gmail.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2 years agoMerge branch 'Teach libbpf to "fix up" BPF verifier log'
Alexei Starovoitov [Tue, 26 Apr 2022 22:41:47 +0000 (15:41 -0700)]
Merge branch 'Teach libbpf to "fix up" BPF verifier log'

Andrii Nakryiko says:

====================

This patch set teaches libbpf to enhance BPF verifier log with human-readable
and relevant information about failed CO-RE relocation. Patch #9 is the main
one with the new logic. See relevant commit messages for some more details.

All the other patches are either fixing various bugs detected
while working on this feature, most prominently a bug with libbpf not handling
CO-RE relocations for SEC("?...") programs, or are refactoring libbpf
internals to allow for easier reuse of CO-RE relo lookup and formatting logic.
====================

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2 years agoselftests/bpf: Add libbpf's log fixup logic selftests
Andrii Nakryiko [Tue, 26 Apr 2022 00:45:11 +0000 (17:45 -0700)]
selftests/bpf: Add libbpf's log fixup logic selftests

Add tests validating that libbpf is indeed patching up BPF verifier log
with CO-RE relocation details. Also test partial and full truncation
scenarios.

This test might be a bit fragile due to changing BPF verifier log
format. If that proves to be frequently breaking, we can simplify tests
or remove the truncation subtests. But for now it seems useful to test
it in those conditions that are otherwise rarely occurring in practice.

Also test CO-RE relo failure in a subprog, as that exercises the subprogram
CO-RE relocation mapping logic, which doesn't work out of the box without
the extra relo storage previously done only for the gen_loader case.

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20220426004511.2691730-11-andrii@kernel.org
2 years agolibbpf: Fix up verifier log for unguarded failed CO-RE relos
Andrii Nakryiko [Tue, 26 Apr 2022 00:45:10 +0000 (17:45 -0700)]
libbpf: Fix up verifier log for unguarded failed CO-RE relos

Teach libbpf to post-process BPF verifier log on BPF program load
failure and detect known error patterns to provide user with more
context.

Currently there is one such common situation: an "unguarded" failed BPF
CO-RE relocation. While a failing CO-RE relocation is expected, it is
expected to be properly guarded in BPF code such that the BPF verifier
always eliminates BPF instructions corresponding to such failed CO-RE
relos as dead code. In cases when the user failed to take such precautions,
the BPF verifier provides the best log it can:

  123: (85) call unknown#195896080
  invalid func unknown#195896080

Such incomprehensible log error is due to libbpf "poisoning" BPF
instruction that corresponds to failed CO-RE relocation by replacing it
with invalid `call 0xbad2310` instruction (195896080 == 0xbad2310 reads
"bad relo" if you squint hard enough).

Luckily, libbpf has all the necessary information to look up CO-RE
relocation that failed and provide more human-readable description of
what's going on:

  5: <invalid CO-RE relocation>
  failed to resolve CO-RE relocation <byte_off> [6] struct task_struct___bad.fake_field_subprog (0:2 @ offset 8)

This hopefully makes it much easier to understand what's wrong with
user's BPF program without googling magic constants.
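
For comparison, a hedged sketch of what a properly guarded relocation
looks like, using bpf_core_field_exists() from bpf_core_read.h; the type
flavor and field names below are made up for illustration:

  #include "vmlinux.h"
  #include <bpf/bpf_helpers.h>
  #include <bpf/bpf_core_read.h>

  char LICENSE[] SEC("license") = "GPL";

  /* ___bad flavor: matched against 'task_struct' in kernel BTF */
  struct task_struct___bad {
          int fake_field;
  } __attribute__((preserve_access_index));

  SEC("tp_btf/sched_switch")
  int handle(void *ctx)
  {
          struct task_struct___bad *t = (void *)bpf_get_current_task_btf();
          int v = 0;

          /* guard: when the relocation fails, the verifier eliminates the
           * access below as dead code instead of hitting a poisoned insn
           */
          if (bpf_core_field_exists(t->fake_field))
                  v = BPF_CORE_READ(t, fake_field);
          return v;
  }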

This BPF verifier log fixup is set up to be extensible and is going to be
used for at least one other upcoming feature of libbpf in follow-up patches.
Libbpf is parsing lines of BPF verifier log starting from the very end.
Currently it processes up to 10 lines of code looking for familiar
patterns. This avoids wasting lots of CPU processing huge verifier logs
(especially for log_level=2 verbosity level). Actual verification error
should normally be found in last few lines, so this should work
reliably.

If libbpf needs to expand log beyond available log_buf_size, it
truncates the end of the verifier log. Given verifier log normally ends
with something like:

  processed 2 insns (limit 1000000) max_states_per_insn 0 total_states 0 peak_states 0 mark_read 0

... truncating this on program load error isn't too bad (the end user can
always increase the log size if they need the complete log).

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20220426004511.2691730-10-andrii@kernel.org
2 years agolibbpf: Simplify bpf_core_parse_spec() signature
Andrii Nakryiko [Tue, 26 Apr 2022 00:45:09 +0000 (17:45 -0700)]
libbpf: Simplify bpf_core_parse_spec() signature

Simplify bpf_core_parse_spec() signature to take struct bpf_core_relo as
an input instead of requiring callers to decompose them into type_id,
relo, spec_str, etc. This makes using and reusing this helper easier.

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20220426004511.2691730-9-andrii@kernel.org
2 years agolibbpf: Refactor CO-RE relo human description formatting routine
Andrii Nakryiko [Tue, 26 Apr 2022 00:45:08 +0000 (17:45 -0700)]
libbpf: Refactor CO-RE relo human description formatting routine

Refactor how CO-RE relocation is formatted. Now it dumps human-readable
representation, currently used by libbpf in either debug or error
message output during CO-RE relocation resolution process, into provided
buffer. This approach allows for better reuse of this functionality
outside of CO-RE relocation resolution, which we'll use in next patch
for providing better error message for BPF verifier rejecting BPF
program due to unguarded failed CO-RE relocation.

It also gets rid of annoying "stitching" of libbpf_print() calls, which
was the only place where we did this.

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20220426004511.2691730-8-andrii@kernel.org
2 years agolibbpf: Record subprog-resolved CO-RE relocations unconditionally
Andrii Nakryiko [Tue, 26 Apr 2022 00:45:07 +0000 (17:45 -0700)]
libbpf: Record subprog-resolved CO-RE relocations unconditionally

Previously, libbpf recorded CO-RE relocations with insns_idx resolved
according to finalized subprog locations (which are appended at the end
of entry BPF program) to simplify the job of light skeleton generator.

This is necessary because once subprogs' instructions are appended to
main entry BPF program all the subprog instruction indices are shifted
and that shift is different for each entry (main) BPF program, so it's
generally impossible to map final absolute insn_idx of the finalized BPF
program to their original locations inside subprograms.

This information is now going to be used not only during light skeleton
generation, but also to map absolute instruction index to subprog's
instruction and its corresponding CO-RE relocation. So start recording
these relocations always, not just when obj->gen_loader is set.

This information is going to be freed at the end of bpf_object__load()
step, as before (but this can change in the future if there will be
a need for this information post load step).

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20220426004511.2691730-7-andrii@kernel.org
2 years agoselftests/bpf: Add CO-RE relos and SEC("?...") to linked_funcs selftests
Andrii Nakryiko [Tue, 26 Apr 2022 00:45:06 +0000 (17:45 -0700)]
selftests/bpf: Add CO-RE relos and SEC("?...") to linked_funcs selftests

Enhance linked_funcs selftest with two tricky features that might not
obviously work correctly together. We add CO-RE relocations to entry BPF
programs and mark those programs as non-autoloadable with SEC("?...")
annotation. This makes sure that libbpf itself handles .BTF.ext CO-RE
relocation data matching correctly for SEC("?...") programs, as well as
ensures that BPF static linker handles this correctly (this was the case
before, no changes are necessary, but it wasn't explicitly tested).
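
A hedged sketch of the combination under test (hook and field names are
illustrative; the real selftest lives under tools/testing/selftests/bpf):
an entry program opted out of autoloading with the '?' section prefix
that still carries a CO-RE relocation.

  #include "vmlinux.h"
  #include <bpf/bpf_helpers.h>
  #include <bpf/bpf_tracing.h>
  #include <bpf/bpf_core_read.h>

  char LICENSE[] SEC("license") = "GPL";

  SEC("?fentry/do_unlinkat")      /* '?' => autoload disabled by default */
  int BPF_PROG(handler)
  {
          struct task_struct *t = bpf_get_current_task_btf();

          /* CO-RE relocated field access inside a non-autoloaded program */
          return BPF_CORE_READ(t, pid);
  }

Userspace can still opt such a program back in with
bpf_program__set_autoload() before loading the object.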

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20220426004511.2691730-6-andrii@kernel.org
2 years agolibbpf: Avoid joining .BTF.ext data with BPF programs by section name
Andrii Nakryiko [Tue, 26 Apr 2022 00:45:05 +0000 (17:45 -0700)]
libbpf: Avoid joining .BTF.ext data with BPF programs by section name

Instead of using ELF section names as a joining key between .BTF.ext and
corresponding BPF programs, pre-build .BTF.ext section number to ELF
section index mapping during bpf_object__open() and use it later for
matching .BTF.ext information (func/line info or CO-RE relocations) to
their respective BPF programs and subprograms.

This simplifies the corresponding joining logic and lets libbpf do
manipulations with BPF programs' ELF sections, like dropping the leading '?'
character for non-autoloaded programs. The original joining logic in
bpf_object__relocate_core() (see relevant comment that's now removed)
was never elegant, so it's a good improvement regardless. But it also
avoids unnecessary internal assumptions about preserving original ELF
section name as BPF program's section name (which was broken when
SEC("?abc") support was added).

Fixes: a3820c481112 ("libbpf: Support opting out from autoloading BPF programs declaratively")
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20220426004511.2691730-5-andrii@kernel.org
2 years agolibbpf: Fix logic for finding matching program for CO-RE relocation
Andrii Nakryiko [Tue, 26 Apr 2022 00:45:04 +0000 (17:45 -0700)]
libbpf: Fix logic for finding matching program for CO-RE relocation

Fix the bug in bpf_object__relocate_core() which can lead to finding an
invalid matching BPF program when processing a CO-RE relocation. If a
matching program is not found, the last encountered program is assumed
to be the correct one, and the problem goes undetected.

Fixes: 9c82a63cf370 ("libbpf: Fix CO-RE relocs against .text section")
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20220426004511.2691730-4-andrii@kernel.org
2 years agolibbpf: Drop unhelpful "program too large" guess
Andrii Nakryiko [Tue, 26 Apr 2022 00:45:03 +0000 (17:45 -0700)]
libbpf: Drop unhelpful "program too large" guess

libbpf pretends it knows the actual limit on BPF program instructions based
on the UAPI headers it was compiled with. There is neither any guarantee
that the UAPI headers match the host kernel, nor does the BPF verifier
actually use the BPF_MAXINSNS constant anymore. Just drop the unhelpful
"guess"; the BPF verifier will emit the actual reason for failure in its
logs anyway.

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20220426004511.2691730-3-andrii@kernel.org
2 years agolibbpf: Fix anonymous type check in CO-RE logic
Andrii Nakryiko [Tue, 26 Apr 2022 00:45:02 +0000 (17:45 -0700)]
libbpf: Fix anonymous type check in CO-RE logic

Use the type name to check whether a CO-RE relocation is referring to an
anonymous type. Using the spec string makes no sense.

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20220426004511.2691730-2-andrii@kernel.org
2 years agobpf: Compute map_btf_id during build time
Menglong Dong [Mon, 25 Apr 2022 13:32:47 +0000 (21:32 +0800)]
bpf: Compute map_btf_id during build time

For now, the field 'map_btf_id' in 'struct bpf_map_ops' for all map
types is computed during vmlinux-btf init:

  btf_parse_vmlinux() -> btf_vmlinux_map_ids_init()

It will look up the btf_type according to the 'map_btf_name' field in
'struct bpf_map_ops'. This process can be done at build time,
thanks to Jiri's resolve_btfids.
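
A hedged sketch of the build-time variant with resolve_btfids, using the
hash map ops as an illustration (not the literal diff):

  #include <linux/btf_ids.h>

  /* resolve_btfids patches the BTF id of 'struct bpf_htab' into this
   * list at build time, so no runtime lookup by name is needed
   */
  BTF_ID_LIST_SINGLE(htab_map_btf_ids, struct, bpf_htab)

  const struct bpf_map_ops htab_map_ops = {
          /* ... the usual map callbacks ... */
          .map_btf_id = &htab_map_btf_ids[0],
  };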

selftest of map_ptr has passed:

  $96 map_ptr:OK
  Summary: 1/0 PASSED, 0 SKIPPED, 0 FAILED

Reported-by: kernel test robot <lkp@intel.com>
Signed-off-by: Menglong Dong <imagedong@tencent.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2 years agonet: usb: qmi_wwan: add support for Sierra Wireless EM7590
Ethan Yang [Mon, 25 Apr 2022 05:40:28 +0000 (13:40 +0800)]
net: usb: qmi_wwan: add support for Sierra Wireless EM7590

Add support for the Sierra Wireless EM7590 0xc081 composition.

Signed-off-by: Ethan Yang <etyang@sierrawireless.com>
Acked-by: Bjørn Mork <bjorn@mork.no>
Link: https://lore.kernel.org/r/20220425054028.5444-1-etyang@sierrawireless.com
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
2 years agonet/af_packet: add VLAN support for AF_PACKET SOCK_RAW GSO
Hangbin Liu [Mon, 25 Apr 2022 01:45:02 +0000 (09:45 +0800)]
net/af_packet: add VLAN support for AF_PACKET SOCK_RAW GSO

Currently, the kernel drops GSO VLAN tagged packets if they are created with
socket(AF_PACKET, SOCK_RAW, 0) plus virtio_net_hdr.

The reason is that AF_PACKET doesn't adjust the skb network header if there
is a VLAN tag. Then, after virtio_net_hdr_set_proto() is called,
skb->protocol will be set to ETH_P_IP/IPv6, and later in
inet/ipv6_gso_segment() the skb is dropped because the network header
position is invalid.

Let's handle VLAN packets by adjusting the network header position in
packet_parse_headers(). The adjustment is safe and does not affect the
later xmit, as the tap device also does this.
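
A simplified, hedged sketch of the adjustment idea; the helpers exist in
include/linux/if_vlan.h, but this is not the literal diff:

  #include <linux/if_vlan.h>
  #include <linux/skbuff.h>

  static void sketch_adjust_network_header(struct sk_buff *skb)
  {
          __be16 proto;
          int depth;

          if (!eth_type_vlan(skb->protocol))
                  return;

          /* resolve the encapsulated protocol and the number of bytes of
           * link-layer + VLAN headers sitting in front of it
           */
          proto = __vlan_get_protocol(skb, skb->protocol, &depth);
          if (!proto)
                  return;

          skb->protocol = proto;
          if (pskb_may_pull(skb, depth))
                  skb_set_network_header(skb, depth);
  }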

In packet_snd(), packet_parse_headers() needs to be moved before calling
virtio_net_hdr_set_proto(), so we can set the correct skb->protocol and
network header first.

There is no need to update tpacket_snd() as it calls packet_parse_headers()
in tpacket_fill_skb(), which is already before calling virtio_net_hdr_*
functions.

The skb->no_fcs setting is also moved up to keep all skb settings together
and stay consistent with packet_sendmsg_spkt().

Signed-off-by: Hangbin Liu <liuhangbin@gmail.com>
Acked-by: Willem de Bruijn <willemb@google.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Link: https://lore.kernel.org/r/20220425014502.985464-1-liuhangbin@gmail.com
Signed-off-by: Paolo Abeni <pabeni@redhat.com>