platform/kernel/linux-starfive.git
17 months ago  libbpf: Selftests for resizing datasec maps
JP Kobryn [Wed, 24 May 2023 00:45:37 +0000 (17:45 -0700)]
libbpf: Selftests for resizing datasec maps

This patch adds test coverage for resizing datasec maps. The first two
subtests resize the bss and custom data sections. In both cases, an
initial array (of length one) has its element set to one. After resizing
the rest of the array is filled with ones as well. A BPF program is then
run to sum the respective arrays and back on the userspace side the sum
is checked to be equal to the number of elements.
The third subtest attempts to perform resizing under conditions that
will result in either the resize failing or the BTF info being cleared.

Signed-off-by: JP Kobryn <inwardvessel@gmail.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: Stanislav Fomichev <sdf@google.com>
Link: https://lore.kernel.org/bpf/20230524004537.18614-3-inwardvessel@gmail.com
17 months ago  libbpf: Add capability for resizing datasec maps
JP Kobryn [Wed, 24 May 2023 00:45:36 +0000 (17:45 -0700)]
libbpf: Add capability for resizing datasec maps

This patch updates bpf_map__set_value_size() so that if the given map is
memory mapped, it will attempt to resize the mapped region. Initial
contents of the mapped region are preserved. BTF is not required, but
after the mapping is resized an attempt is made to adjust the associated
BTF information if the following criteria are met:
 - BTF info is present
 - the map is a datasec
 - the final variable in the datasec is an array

... the resulting BTF info will be updated so that the final array
variable is associated with a new BTF array type sized to cover the
requested size.

Note that the initial resizing of the memory mapped region can succeed
while the subsequent BTF adjustment can fail. In this case, BTF info is
dropped from the map by clearing the key and value type.
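For illustration, a minimal usage sketch of the new capability (the
function name and the array element type below are illustrative, not part
of the patch; the call must happen before bpf_object__load()):

  #include <errno.h>
  #include <linux/types.h>
  #include <bpf/libbpf.h>

  /* Grow the trailing array of the .bss datasec map to nr_elems __u64s. */
  static int grow_bss_array(struct bpf_object *obj, size_t nr_elems)
  {
          /* .bss is exposed by libbpf as a memory-mapped datasec map */
          struct bpf_map *bss = bpf_object__find_map_by_name(obj, ".bss");

          if (!bss)
                  return -ENOENT;
          /* Remaps the region; BTF for the final array variable is adjusted
           * when possible, otherwise the map's BTF info is cleared. */
          return bpf_map__set_value_size(bss, nr_elems * sizeof(__u64));
  }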

Signed-off-by: JP Kobryn <inwardvessel@gmail.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: Stanislav Fomichev <sdf@google.com>
Link: https://lore.kernel.org/bpf/20230524004537.18614-2-inwardvessel@gmail.com
17 months ago  selftests/bpf: Add path_fd-based BPF_OBJ_PIN and BPF_OBJ_GET tests
Andrii Nakryiko [Tue, 23 May 2023 17:00:13 +0000 (10:00 -0700)]
selftests/bpf: Add path_fd-based BPF_OBJ_PIN and BPF_OBJ_GET tests

Add a selftest demonstrating using detach-mounted BPF FS using new mount
APIs, and pinning and getting BPF map using such mount. This
demonstrates how something like container manager could setup BPF FS,
pin and adjust all the necessary objects in it, all before exposing BPF
FS to a particular mount namespace.

Also add a few subtests validating all meaningful combinations of
path_fd and pathname. We use mounted /sys/fs/bpf location for these.

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20230523170013.728457-5-andrii@kernel.org
17 months ago  libbpf: Add opts-based bpf_obj_pin() API and add support for path_fd
Andrii Nakryiko [Tue, 16 May 2023 00:01:58 +0000 (17:01 -0700)]
libbpf: Add opts-based bpf_obj_pin() API and add support for path_fd

Add path_fd support for bpf_obj_pin() and bpf_obj_get() operations
(through their opts-based variants). This allows to take advantage of
new kernel-side support for O_PATH-based pin/get location specification.
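A hedged sketch of the opts-based calls (the directory path and pin name
are illustrative):

  #define _GNU_SOURCE
  #include <fcntl.h>
  #include <bpf/bpf.h>

  /* Pin and re-open a map relative to an O_PATH directory fd. */
  static int pin_and_get(int map_fd)
  {
          int dir_fd = open("/sys/fs/bpf", O_PATH);
          LIBBPF_OPTS(bpf_obj_pin_opts, pin_opts, .path_fd = dir_fd);
          LIBBPF_OPTS(bpf_obj_get_opts, get_opts, .path_fd = dir_fd);

          if (dir_fd < 0)
                  return -1;
          if (bpf_obj_pin_opts(map_fd, "my_map", &pin_opts))
                  return -1;
          /* pathname is interpreted relative to path_fd, *at()-style */
          return bpf_obj_get_opts("my_map", &get_opts);
  }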

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20230523170013.728457-4-andrii@kernel.org
17 months ago  bpf: Support O_PATH FDs in BPF_OBJ_PIN and BPF_OBJ_GET commands
Andrii Nakryiko [Mon, 15 May 2023 23:48:06 +0000 (16:48 -0700)]
bpf: Support O_PATH FDs in BPF_OBJ_PIN and BPF_OBJ_GET commands

Current UAPI of BPF_OBJ_PIN and BPF_OBJ_GET commands of bpf() syscall
forces users to specify pinning location as a string-based absolute or
relative (to the current working directory) path. This has various
implications related to security (e.g., symlink-based attacks) and forces
BPF FS to be exposed in the file system, which can cause races with
other applications.

One piece of feedback we got from folks working heavily with containers
was that the inability to use a purely FD-based location specification
was an unfortunate limitation and hindrance for the BPF_OBJ_PIN and
BPF_OBJ_GET commands. This patch closes this oversight, adding a path_fd
field to
BPF_OBJ_PIN and BPF_OBJ_GET UAPI, following conventions established by
*at() syscalls for dirfd + pathname combinations.

This now allows interesting possibilities like working with a detached
BPF FS mount (e.g., to perform multiple pinnings without running a risk
of someone interfering with them), and generally makes pinning/getting
more secure and less prone to races and/or security attacks.

This is demonstrated by a selftest added in subsequent patch that takes
advantage of new mount APIs (fsopen, fsconfig, fsmount) to demonstrate
creating detached BPF FS mount, pinning, and then getting BPF map out of
it, all while never exposing this private instance of BPF FS to outside
worlds.
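A rough sketch of that flow (raw syscalls are used for the new mount API,
assuming SYS_fsopen/SYS_fsconfig/SYS_fsmount are available from the libc
headers; error handling is mostly omitted and the pin name is
illustrative):

  #include <unistd.h>
  #include <sys/syscall.h>
  #include <linux/mount.h>
  #include <bpf/bpf.h>

  static int pin_in_detached_bpffs(int map_fd)
  {
          int fs_fd, mnt_fd;

          fs_fd = syscall(SYS_fsopen, "bpf", 0);
          syscall(SYS_fsconfig, fs_fd, FSCONFIG_CMD_CREATE, NULL, NULL, 0);
          mnt_fd = syscall(SYS_fsmount, fs_fd, 0, 0);  /* detached BPF FS */

          /* Pin relative to the detached mount; nothing becomes visible in
           * any mount namespace unless this mount is later attached. */
          LIBBPF_OPTS(bpf_obj_pin_opts, opts, .path_fd = mnt_fd);
          return bpf_obj_pin_opts(map_fd, "my_map", &opts);
  }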

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Reviewed-by: Christian Brauner <brauner@kernel.org>
Link: https://lore.kernel.org/bpf/20230523170013.728457-4-andrii@kernel.org
17 months ago  libbpf: Start v1.3 development cycle
Andrii Nakryiko [Tue, 23 May 2023 17:00:11 +0000 (10:00 -0700)]
libbpf: Start v1.3 development cycle

Bump libbpf.map to v1.3.0 to start a new libbpf version cycle.

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20230523170013.728457-3-andrii@kernel.org
17 months ago  bpf: Validate BPF object in BPF_OBJ_PIN before calling LSM
Andrii Nakryiko [Mon, 22 May 2023 23:29:14 +0000 (16:29 -0700)]
bpf: Validate BPF object in BPF_OBJ_PIN before calling LSM

Do a sanity check whether provided file-to-be-pinned is actually a BPF
object (prog, map, btf) before calling security_path_mknod LSM hook. If
it's not, LSM hook doesn't have to be triggered, as the operation has no
chance of succeeding anyways.

Suggested-by: Christian Brauner <brauner@kernel.org>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Reviewed-by: Christian Brauner <brauner@kernel.org>
Link: https://lore.kernel.org/bpf/20230522232917.2454595-2-andrii@kernel.org
17 months ago  bpftool: Specify XDP Hints ifname when loading program
Larysa Zaremba [Wed, 17 May 2023 16:01:04 +0000 (18:01 +0200)]
bpftool: Specify XDP Hints ifname when loading program

Add ability to specify a network interface used to resolve XDP hints
kfuncs when loading program through bpftool.

Usage:

  bpftool prog load [...] xdpmeta_dev <ifname>

Writing just 'dev <ifname>' instead of 'xdpmeta_dev' is a very likely
mistake that results in errors that are not very descriptive, so the
'bpftool prog load [...] dev <ifname>' syntax becomes deprecated,
followed by 'bpftool map create [...] dev <ifname>' for consistency.

Now, to offload program, execute:

  bpftool prog load [...] offload_dev <ifname>

To offload map:

  bpftool map create [...] offload_dev <ifname>

'dev <ifname>' still performs offloading in the commands above, but now
triggers a warning and is excluded from bash completion.

'xdpmeta_dev' and 'offload_dev' are mutually exclusive options, because
'xdpmeta_dev' basically makes a program device-bound without loading it
onto the said device. For now, offloaded programs cannot use XDP hints [0],
but if this changes, using 'offload_dev <ifname>' should cover this case.

  [0] https://lore.kernel.org/bpf/a5a636cc-5b03-686f-4be0-000383b05cfc@linux.dev

Signed-off-by: Larysa Zaremba <larysa.zaremba@intel.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Reviewed-by: Quentin Monnet <quentin@isovalent.com>
Link: https://lore.kernel.org/bpf/20230517160103.1088185-1-larysa.zaremba@intel.com
17 months ago  selftests/bpf: Add xdp_feature selftest for bond device
Lorenzo Bianconi [Wed, 17 May 2023 13:41:33 +0000 (15:41 +0200)]
selftests/bpf: Add xdp_feature selftest for bond device

Introduce selftests to check xdp_feature support for bond driver.

Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Reviewed-by: Jussi Maki <joamaki@gmail.com>
Link: https://lore.kernel.org/bpf/64cb8f20e6491f5b971f8d3129335093c359aad7.1684329998.git.lorenzo@kernel.org
17 months ago  Merge branch 'bpf: Add socket destroy capability'
Martin KaFai Lau [Sat, 20 May 2023 00:45:47 +0000 (17:45 -0700)]
Merge branch 'bpf: Add socket destroy capability'

Aditi Ghag says:

====================

This patch set adds the capability to destroy sockets in BPF. We plan to
use the capability in Cilium to force client sockets to reconnect when
their remote load-balancing backends are deleted. The other use case is
on-the-fly policy enforcement where existing socket connections
prevented by policies need to be terminated.

The use cases, and more details around
the selected approach were presented at LPC 2022 -
https://lpc.events/event/16/contributions/1358/.
RFC discussion -
https://lore.kernel.org/netdev/CABG=zsBEh-P4NXk23eBJw7eajB5YJeRS7oPXnTAzs=yob4EMoQ@mail.gmail.com/T/#u.
v8 patch series -
https://lore.kernel.org/bpf/20230517175359.527917-1-aditi.ghag@isovalent.com/

v9 highlights:
Address review comments:
Martin:
- Rearranged the kfunc filter patch, and added the missing break
  statement.
- Squashed the extended selftest/bpf patch.
Yonghong:
- Revised commit message for patch 1.

(The notes below are the same as in the v8 patch series and are still
relevant. Refer to earlier patch series versions for other notes.)
- I hit a snag while writing the kfunc where verifier complained about the
  `sock_common` type passed from TCP iterator. With kfuncs, there don't
  seem to be any options available to pass BTF type hints to the verifier
  (equivalent of `ARG_PTR_TO_BTF_ID_SOCK_COMMON`, as was the case with the
  helper).  As a result, I changed the argument type of the sock_destroy
  kfunc to `sock_common`.
====================

Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
17 months ago  selftests/bpf: Test bpf_sock_destroy
Aditi Ghag [Fri, 19 May 2023 22:51:57 +0000 (22:51 +0000)]
selftests/bpf: Test bpf_sock_destroy

The test cases for destroying sockets mirror the intended usages of the
bpf_sock_destroy kfunc using iterators.

The destroy helpers set `ECONNABORTED` error code that we can validate
in the test code with client sockets. But UDP sockets have an overriding
error code from `disconnect()` called during abort, so the error code
validation is only done for TCP sockets.

The failure test cases validate that the `bpf_sock_destroy` kfunc is not
allowed from program attach types other than BPF trace iterator, and
such programs fail to load.

Signed-off-by: Aditi Ghag <aditi.ghag@isovalent.com>
Link: https://lore.kernel.org/r/20230519225157.760788-10-aditi.ghag@isovalent.com
Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
17 months ago  selftests/bpf: Add helper to get port using getsockname
Aditi Ghag [Fri, 19 May 2023 22:51:56 +0000 (22:51 +0000)]
selftests/bpf: Add helper to get port using getsockname

The helper will be used to programmatically retrieve
and pass ports in userspace and kernel selftest programs.
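A sketch of what such a helper can look like (the name and exact
signature below are illustrative, not necessarily the helper added by
this patch):

  #include <errno.h>
  #include <sys/socket.h>
  #include <netinet/in.h>
  #include <linux/types.h>

  /* Return the local port of fd (network byte order) via *port. */
  static int get_sock_port(int fd, __u16 *port)
  {
          struct sockaddr_storage ss = {};
          socklen_t len = sizeof(ss);

          if (getsockname(fd, (struct sockaddr *)&ss, &len))
                  return -errno;
          if (ss.ss_family == AF_INET)
                  *port = ((struct sockaddr_in *)&ss)->sin_port;
          else
                  *port = ((struct sockaddr_in6 *)&ss)->sin6_port;
          return 0;
  }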

Suggested-by: Stanislav Fomichev <sdf@google.com>
Signed-off-by: Aditi Ghag <aditi.ghag@isovalent.com>
Link: https://lore.kernel.org/r/20230519225157.760788-9-aditi.ghag@isovalent.com
Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
17 months ago  bpf: Add bpf_sock_destroy kfunc
Aditi Ghag [Fri, 19 May 2023 22:51:55 +0000 (22:51 +0000)]
bpf: Add bpf_sock_destroy kfunc

The socket destroy kfunc is used to forcefully terminate sockets from
certain BPF contexts. We plan to use the capability in Cilium
load-balancing to terminate client sockets that continue to connect to
deleted backends.  The other use case is on-the-fly policy enforcement
where existing socket connections prevented by policies need to be
forcefully terminated.  The kfunc also allows terminating sockets that may
or may not be actively sending traffic.

The kfunc can currently be called only from BPF TCP and UDP iterators,
where users can filter and terminate selected sockets. More
specifically, it can only be called from BPF contexts that ensure
socket locking in order to allow synchronous execution of
protocol-specific `diag_destroy` handlers. The previous commit that
batches UDP sockets during iteration facilitated a synchronous
invocation of the UDP destroy callback from BPF context by skipping
socket locks in `udp_abort`. The TCP iterator already supported
batching of the sockets being iterated. To that end, a
`tracing_iter_filter` callback filter is added so that the verifier can
restrict the kfunc to programs with the `BPF_TRACE_ITER` attach type,
and reject other programs.

The kfunc takes a `sock_common` type argument, even though it expects,
and casts it to, a `sock` pointer. This enables the verifier to allow
the sock_destroy kfunc to be called for TCP with `sock_common` and UDP
with `sock` structs. Furthermore, as `sock_common` only has a subset of
the fields of `sock`, casting a pointer to the latter type might not
always be safe for certain sockets like request sockets, but these have
special handling in the diag_destroy handlers.

Additionally, the kfunc is defined with the `KF_TRUSTED_ARGS` flag to
avoid cases where a `PTR_TO_BTF_ID` sk is obtained by following another
pointer, e.g. getting an sk pointer (possibly even NULL) by following
another sk pointer. The pointer socket argument passed in TCP and UDP
iterators is
tagged as `PTR_TRUSTED` in {tcp,udp}_reg_info.  The TRUSTED arg changes
are contributed by Martin KaFai Lau <martin.lau@kernel.org>.
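For illustration, a hedged sketch of the intended call site: a BPF TCP
iterator program that destroys sockets bound to one (illustrative) local
port. The program name and port value are assumptions, not taken
verbatim from the selftests:

  #include "vmlinux.h"
  #include <bpf/bpf_helpers.h>

  int bpf_sock_destroy(struct sock_common *sk) __ksym;

  SEC("iter/tcp")
  int iter_tcp_destroy(struct bpf_iter__tcp *ctx)
  {
          struct sock_common *skc = ctx->sk_common;

          if (!skc)
                  return 0;
          /* terminate only sockets with (illustrative) local port 8080 */
          if (skc->skc_num == 8080)
                  bpf_sock_destroy(skc);
          return 0;
  }

  char _license[] SEC("license") = "GPL";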

Signed-off-by: Aditi Ghag <aditi.ghag@isovalent.com>
Link: https://lore.kernel.org/r/20230519225157.760788-8-aditi.ghag@isovalent.com
Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
17 months ago  bpf: Add kfunc filter function to 'struct btf_kfunc_id_set'
Aditi Ghag [Fri, 19 May 2023 22:51:54 +0000 (22:51 +0000)]
bpf: Add kfunc filter function to 'struct btf_kfunc_id_set'

This commit adds the ability to filter kfuncs to certain BPF program
types. This is required to limit bpf_sock_destroy kfunc implemented in
follow-up commits to programs with attach type 'BPF_TRACE_ITER'.

The commit adds a callback filter to 'struct btf_kfunc_id_set'.  The
filter has access to the `bpf_prog` construct, including its properties
such as `expected_attach_type`.
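A condensed sketch of the kernel-side shape this takes, based on the
description above (the set contents and names are trimmed for
illustration, not copied from the patch):

  static int tracing_iter_filter(const struct bpf_prog *prog, u32 kfunc_id)
  {
          /* only BPF iterator programs may use this kfunc set */
          if (prog->expected_attach_type != BPF_TRACE_ITER)
                  return -EACCES;
          return 0;
  }

  BTF_SET8_START(bpf_sk_iter_kfunc_ids)
  BTF_ID_FLAGS(func, bpf_sock_destroy, KF_TRUSTED_ARGS)
  BTF_SET8_END(bpf_sk_iter_kfunc_ids)

  static const struct btf_kfunc_id_set bpf_sk_iter_kfunc_set = {
          .owner  = THIS_MODULE,
          .set    = &bpf_sk_iter_kfunc_ids,
          .filter = tracing_iter_filter,
  };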

Signed-off-by: Aditi Ghag <aditi.ghag@isovalent.com>
Link: https://lore.kernel.org/r/20230519225157.760788-7-aditi.ghag@isovalent.com
Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
17 months ago  bpf: udp: Implement batching for sockets iterator
Aditi Ghag [Fri, 19 May 2023 22:51:53 +0000 (22:51 +0000)]
bpf: udp: Implement batching for sockets iterator

Batch UDP sockets in the BPF iterator, which allows for overlapping
locking semantics in BPF/kernel helpers executed from BPF programs.
This facilitates the BPF socket destroy kfunc (introduced by follow-up
patches) executing from BPF iterator programs.

Previously, BPF iterators acquired the sock lock and sockets hash table
bucket lock while executing BPF programs. This prevented BPF helpers that
again acquire these locks to be executed from BPF iterators.  With the
batching approach, we acquire a bucket lock, batch all the bucket sockets,
and then release the bucket lock. This enables BPF or kernel helpers to
skip sock locking when invoked in the supported BPF contexts.

The batching logic is similar to the logic implemented in TCP iterator:
https://lore.kernel.org/bpf/20210701200613.1036157-1-kafai@fb.com/.

Suggested-by: Martin KaFai Lau <martin.lau@kernel.org>
Signed-off-by: Aditi Ghag <aditi.ghag@isovalent.com>
Link: https://lore.kernel.org/r/20230519225157.760788-6-aditi.ghag@isovalent.com
Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
17 months ago  udp: seq_file: Remove bpf_seq_afinfo from udp_iter_state
Aditi Ghag [Fri, 19 May 2023 22:51:52 +0000 (22:51 +0000)]
udp: seq_file: Remove bpf_seq_afinfo from udp_iter_state

This is a preparatory commit to remove the field. The field was
previously shared between proc fs and BPF UDP socket iterators. As the
follow-up commits will decouple the implementation for the iterators,
remove the field. As for the BPF socket iterator, filtering of sockets
is expected to be done in BPF programs.

Suggested-by: Martin KaFai Lau <martin.lau@kernel.org>
Signed-off-by: Aditi Ghag <aditi.ghag@isovalent.com>
Link: https://lore.kernel.org/r/20230519225157.760788-5-aditi.ghag@isovalent.com
Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
17 months ago  bpf: udp: Encapsulate logic to get udp table
Aditi Ghag [Fri, 19 May 2023 22:51:51 +0000 (22:51 +0000)]
bpf: udp: Encapsulate logic to get udp table

This is a preparatory commit that encapsulates the logic
to get udp table in iterator inside udp_get_table_afinfo, and
renames the function to `udp_get_table_seq` accordingly.

Suggested-by: Martin KaFai Lau <martin.lau@kernel.org>
Signed-off-by: Aditi Ghag <aditi.ghag@isovalent.com>
Link: https://lore.kernel.org/r/20230519225157.760788-4-aditi.ghag@isovalent.com
Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
17 months ago  udp: seq_file: Helper function to match socket attributes
Aditi Ghag [Fri, 19 May 2023 22:51:50 +0000 (22:51 +0000)]
udp: seq_file: Helper function to match socket attributes

This is a preparatory commit to refactor code that matches socket
attributes in iterators to a helper function, and use it in the
proc fs iterator.

Signed-off-by: Aditi Ghag <aditi.ghag@isovalent.com>
Link: https://lore.kernel.org/r/20230519225157.760788-3-aditi.ghag@isovalent.com
Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
17 months ago  bpf: tcp: Avoid taking fast sock lock in iterator
Aditi Ghag [Fri, 19 May 2023 22:51:49 +0000 (22:51 +0000)]
bpf: tcp: Avoid taking fast sock lock in iterator

This is a preparatory commit to replace `lock_sock_fast` with
`lock_sock`, and to facilitate BPF programs executed from the TCP
sockets iterator being able to destroy TCP sockets using the
bpf_sock_destroy kfunc (implemented in follow-up commits).

Previously, the BPF TCP iterator was acquiring the sock lock with BH
disabled. This led to scenarios where the sockets hash table bucket lock
can be acquired with BH enabled in some paths versus disabled in others.
In such a situation, the kernel issues a warning since it thinks that in
the BH-enabled path the same bucket lock *might* be acquired again in
softirq context (BH disabled), which would lead to a potential deadlock.
Since bpf_sock_destroy also happens in a process context, the potential
deadlock warning is likely a false alarm.

Here is a snippet of annotated stack trace that motivated this change:

```

Possible interrupt unsafe locking scenario:

      CPU0                    CPU1
      ----                    ----
 lock(&h->lhash2[i].lock);
                              local_bh_disable();
                              lock(&h->lhash2[i].lock);
kernel imagined possible scenario:
  local_bh_disable();  /* Possible softirq */
  lock(&h->lhash2[i].lock);
*** Potential Deadlock ***

process context:

lock_acquire+0xcd/0x330
_raw_spin_lock+0x33/0x40
------> Acquire (bucket) lhash2.lock with BH enabled
__inet_hash+0x4b/0x210
inet_csk_listen_start+0xe6/0x100
inet_listen+0x95/0x1d0
__sys_listen+0x69/0xb0
__x64_sys_listen+0x14/0x20
do_syscall_64+0x3c/0x90
entry_SYSCALL_64_after_hwframe+0x72/0xdc

bpf_sock_destroy run from iterator:

lock_acquire+0xcd/0x330
_raw_spin_lock+0x33/0x40
------> Acquire (bucket) lhash2.lock with BH disabled
inet_unhash+0x9a/0x110
tcp_set_state+0x6a/0x210
tcp_abort+0x10d/0x200
bpf_prog_6793c5ca50c43c0d_iter_tcp6_server+0xa4/0xa9
bpf_iter_run_prog+0x1ff/0x340
------> lock_sock_fast that acquires sock lock with BH disabled
bpf_iter_tcp_seq_show+0xca/0x190
bpf_seq_read+0x177/0x450

```

Also, Yonghong reported a deadlock for non-listening TCP sockets that
this change resolves. Previously, `lock_sock_fast` held the sock spin
lock with BH disabled, and the lock was again being acquired in `tcp_abort`:

```
watchdog: BUG: soft lockup - CPU#0 stuck for 86s! [test_progs:2331]
RIP: 0010:queued_spin_lock_slowpath+0xd8/0x500
Call Trace:
 <TASK>
 _raw_spin_lock+0x84/0x90
 tcp_abort+0x13c/0x1f0
 bpf_prog_88539c5453a9dd47_iter_tcp6_client+0x82/0x89
 bpf_iter_run_prog+0x1aa/0x2c0
 ? preempt_count_sub+0x1c/0xd0
 ? from_kuid_munged+0x1c8/0x210
 bpf_iter_tcp_seq_show+0x14e/0x1b0
 bpf_seq_read+0x36c/0x6a0

bpf_iter_tcp_seq_show
   lock_sock_fast
     __lock_sock_fast
       spin_lock_bh(&sk->sk_lock.slock);
        /* Fast path return with bottom halves disabled and
         * sock::sk_lock.slock held. */

 ...
 tcp_abort
   local_bh_disable();
   spin_lock(&((sk)->sk_lock.slock)); // from bh_lock_sock(sk)

```

With the switch to `lock_sock`, it calls `spin_unlock_bh` before returning:

```
lock_sock
    lock_sock_nested
       spin_lock_bh(&sk->sk_lock.slock);
       :
       spin_unlock_bh(&sk->sk_lock.slock);
```

Acked-by: Yonghong Song <yhs@meta.com>
Acked-by: Stanislav Fomichev <sdf@google.com>
Signed-off-by: Aditi Ghag <aditi.ghag@isovalent.com>
Link: https://lore.kernel.org/r/20230519225157.760788-2-aditi.ghag@isovalent.com
Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
17 months ago  Merge branch 'bpf: Show target_{obj,btf}_id for tracing link'
Alexei Starovoitov [Fri, 19 May 2023 17:06:45 +0000 (10:06 -0700)]
Merge branch 'bpf: Show target_{obj,btf}_id for tracing link'

Yafang Shao says:

====================
The target_btf_id can help us understand which kernel function is
linked by a tracing prog. The target_btf_id and target_obj_id have
already been exposed to userspace, so we just need to show them.

For some other link types like perf_event and kprobe_multi, it is not
easy to find which functions are attached either. We may support
->fill_link_info for them in the future.

v1->v2:
- Skip showing them in the plain output for the old kernels. (Quentin)
- Coding improvement. (Andrii)
====================

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
17 months ago  bpftool: Show target_{obj,btf}_id in tracing link info
Yafang Shao [Wed, 17 May 2023 10:31:26 +0000 (10:31 +0000)]
bpftool: Show target_{obj,btf}_id in tracing link info

The target_btf_id can help us understand which kernel function is
linked by a tracing prog. The target_btf_id and target_obj_id have
already been exposed to userspace, so we just need to show them.

The result is as follows:

$ tools/bpf/bpftool/bpftool link show
2: tracing  prog 13
        prog_type tracing  attach_type trace_fentry
        target_obj_id 1  target_btf_id 13964
        pids trace(10673)

$ tools/bpf/bpftool/bpftool link show -j
[{"id":2,"type":"tracing","prog_id":13,"prog_type":"tracing","attach_type":"trace_fentry","target_obj_id":1,"target_btf_id":13964,"pids":[{"pid":10673,"comm":"trace"}]}]

Signed-off-by: Yafang Shao <laoar.shao@gmail.com>
Acked-by: Song Liu <song@kernel.org>
Link: https://lore.kernel.org/r/20230517103126.68372-3-laoar.shao@gmail.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
17 months ago  bpf: Show target_{obj,btf}_id in tracing link fdinfo
Yafang Shao [Wed, 17 May 2023 10:31:25 +0000 (10:31 +0000)]
bpf: Show target_{obj,btf}_id in tracing link fdinfo

The target_btf_id can help us understand which kernel function is
linked by a tracing prog. The target_btf_id and target_obj_id have
already been exposed to userspace, so we just need to show them.

The result is as follows:

$ cat /proc/10673/fdinfo/10
pos:    0
flags:  02000000
mnt_id: 15
ino:    2094
link_type:      tracing
link_id:        2
prog_tag:       a04f5eef06a7f555
prog_id:        13
attach_type:    24
target_obj_id:  1
target_btf_id:  13964

Signed-off-by: Yafang Shao <laoar.shao@gmail.com>
Acked-by: Song Liu <song@kernel.org>
Link: https://lore.kernel.org/r/20230517103126.68372-2-laoar.shao@gmail.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
17 months ago  selftests/bpf: Make bpf_dynptr_is_rdonly() prototype consistent with kernel
Yonghong Song [Wed, 17 May 2023 04:04:09 +0000 (21:04 -0700)]
selftests/bpf: Make bpf_dynptr_is_rdonly() prototype consistent with kernel

Currently kernel kfunc bpf_dynptr_is_rdonly() has prototype ...

  __bpf_kfunc bool bpf_dynptr_is_rdonly(struct bpf_dynptr_kern *ptr)

... while selftests bpf_kfuncs.h has:

  extern int bpf_dynptr_is_rdonly(const struct bpf_dynptr *ptr) __ksym;

Such a mismatch might cause problems, although currently it is okay in
selftests. Fix it to prevent potential future surprises.
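The fix presumably just aligns the selftest-side declaration's return
type with the kernel, i.e. something like:

  extern bool bpf_dynptr_is_rdonly(const struct bpf_dynptr *ptr) __ksym;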

Signed-off-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20230517040409.4024618-1-yhs@fb.com
17 months ago  selftests/bpf: Fix dynptr/test_dynptr_is_null
Yonghong Song [Wed, 17 May 2023 04:04:04 +0000 (21:04 -0700)]
selftests/bpf: Fix dynptr/test_dynptr_is_null

With latest llvm17, dynptr/test_dynptr_is_null subtest failed in my testing
VM. The failure log looks like below:

  All error logs:
  tester_init:PASS:tester_log_buf 0 nsec
  process_subtest:PASS:obj_open_mem 0 nsec
  process_subtest:PASS:Can't alloc specs array 0 nsec
  verify_success:PASS:dynptr_success__open 0 nsec
  verify_success:PASS:bpf_object__find_program_by_name 0 nsec
  verify_success:PASS:dynptr_success__load 0 nsec
  verify_success:PASS:bpf_program__attach 0 nsec
  verify_success:FAIL:err unexpected err: actual 4 != expected 0
  #65/9    dynptr/test_dynptr_is_null:FAIL

The error happens for bpf prog test_dynptr_is_null in dynptr_success.c:

        if (bpf_dynptr_is_null(&ptr2)) {
                err = 4;
                goto exit;
        }

The bpf_dynptr_is_null(&ptr2) call unexpectedly returned a non-zero
value and control went to the error path. Digging further, I found the
root cause is a function signature difference between the kernel and
user space.

In kernel, we have ...

  __bpf_kfunc bool bpf_dynptr_is_null(struct bpf_dynptr_kern *ptr)

... while in bpf_kfuncs.h we have:

  extern int bpf_dynptr_is_null(const struct bpf_dynptr *ptr) __ksym;

The kernel bpf_dynptr_is_null disasm code:

  ffffffff812f1a90 <bpf_dynptr_is_null>:
  ffffffff812f1a90: f3 0f 1e fa           endbr64
  ffffffff812f1a94: 0f 1f 44 00 00        nopl    (%rax,%rax)
  ffffffff812f1a99: 53                    pushq   %rbx
  ffffffff812f1a9a: 48 89 fb              movq    %rdi, %rbx
  ffffffff812f1a9d: e8 ae 29 17 00        callq   0xffffffff81464450 <__asan_load8_noabort>
  ffffffff812f1aa2: 48 83 3b 00           cmpq    $0x0, (%rbx)
  ffffffff812f1aa6: 0f 94 c0              sete    %al
  ffffffff812f1aa9: 5b                    popq    %rbx
  ffffffff812f1aaa: c3                    retq

Note that only the 1-byte register %al is set and the other 7 bytes are
not touched. In the BPF program, the asm code for the above
bpf_dynptr_is_null(&ptr2) is:

       266:       85 10 00 00 ff ff ff ff call -0x1
       267:       b4 01 00 00 04 00 00 00 w1 = 0x4
       268:       16 00 03 00 00 00 00 00 if w0 == 0x0 goto +0x3 <LBB9_8>

Basically, a 4-byte subregister is tested. This might cause an error, as
the bytes other than the lowest byte might not be 0.

This patch fixes the issue by using an identical func prototype across
the kernel and selftest user space. The fixed bpf asm code:

       267:       85 10 00 00 ff ff ff ff call -0x1
       268:       54 00 00 00 01 00 00 00 w0 &= 0x1
       269:       b4 01 00 00 04 00 00 00 w1 = 0x4
       270:       16 00 03 00 00 00 00 00 if w0 == 0x0 goto +0x3 <LBB9_8>

Signed-off-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20230517040404.4023912-1-yhs@fb.com
17 months ago  bpf, docs: Shift operations are defined to use a mask
Dave Thaler [Tue, 9 May 2023 18:08:45 +0000 (18:08 +0000)]
bpf, docs: Shift operations are defined to use a mask

Update the documentation regarding shift operations to explain the
use of a mask, since otherwise shifting by a value out of range
(like negative) is undefined.
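Roughly, the updated text defines the shift semantics with an explicit
mask on the shift amount (63 for 64-bit ALU, 31 for 32-bit ALU), e.g.:

  dst = dst << (src & 63)        /* BPF_LSH | BPF_X | BPF_ALU64 */
  dst = (u32) dst << (src & 31)  /* BPF_LSH | BPF_X | BPF_ALU   */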

Signed-off-by: Dave Thaler <dthaler@microsoft.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Yonghong Song <yhs@fb.com>
Link: https://lore.kernel.org/bpf/20230509180845.1236-1-dthaler1968@googlemail.com
17 months ago  bpftool: Support bpffs mountpoint as pin path for prog loadall
Pengcheng Yang [Sat, 6 May 2023 03:07:19 +0000 (11:07 +0800)]
bpftool: Support bpffs mountpoint as pin path for prog loadall

Currently, when using prog loadall and the pin path is a bpffs mountpoint,
bpffs will be repeatedly mounted to the parent directory of the bpffs
mountpoint path. For example, a `bpftool prog loadall test.o /sys/fs/bpf`
will trigger this.

Signed-off-by: Pengcheng Yang <yangpc@wangsu.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Reviewed-by: Quentin Monnet <quentin@isovalent.com>
Link: https://lore.kernel.org/bpf/1683342439-3677-1-git-send-email-yangpc@wangsu.com
17 months ago  selftests/bpf: Do not use sign-file as testcase
Alexey Gladkov [Wed, 17 May 2023 09:49:46 +0000 (11:49 +0200)]
selftests/bpf: Do not use sign-file as testcase

The sign-file utility (from scripts/) is used in prog_tests/verify_pkcs7_sig.c,
but the utility should not be called as a test. Executing this utility produces
the following error:

  selftests: /linux/tools/testing/selftests/bpf: urandom_read
  ok 16 selftests: /linux/tools/testing/selftests/bpf: urandom_read

  selftests: /linux/tools/testing/selftests/bpf: sign-file
  not ok 17 selftests: /linux/tools/testing/selftests/bpf: sign-file # exit=2

Also, urandom_read is mistakenly used as a test. It does not lead to an error,
but should be moved over to TEST_GEN_FILES as well. The empty TEST_CUSTOM_PROGS
can then be removed.

Fixes: fc97590668ae ("selftests/bpf: Add test for bpf_verify_pkcs7_signature() kfunc")
Signed-off-by: Alexey Gladkov <legion@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Reviewed-by: Roberto Sassu <roberto.sassu@huawei.com>
Acked-by: Stanislav Fomichev <sdf@google.com>
Link: https://lore.kernel.org/bpf/ZEuWFk3QyML9y5QQ@example.org
Link: https://lore.kernel.org/bpf/88e3ab23029d726a2703adcf6af8356f7a2d3483.1684316821.git.legion@kernel.org
17 months ago  bpf: drop unnecessary user-triggerable WARN_ONCE in verifier log
Andrii Nakryiko [Tue, 16 May 2023 18:04:09 +0000 (11:04 -0700)]
bpf: drop unnecessary user-triggerable WARN_ONCE in verifier log

It's trivial for a user to trigger the "verifier log line truncated"
warning, as the verifier has a fixed-size buffer of 1024 bytes (as of
now), and there are at least two pieces of user-provided information
that can be output through this buffer, and both can be arbitrarily
sized by the user:
  - BTF names;
  - BTF.ext source code lines strings.

The verifier log buffer should be properly sized for typical verifier
state output, but it's sort of expected that this buffer won't be long
enough in some circumstances. So let's drop the check. In any case, the
code will work correctly, at worst truncating part of a single line of
output.

Reported-by: syzbot+8b2a08dfbd25fd933d75@syzkaller.appspotmail.com
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/r/20230516180409.3549088-1-andrii@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
17 months ago  Merge branch 'selftests/xsk: prepare for AF_XDP multi-buffer testing'
Alexei Starovoitov [Wed, 17 May 2023 05:31:52 +0000 (22:31 -0700)]
Merge branch 'selftests/xsk: prepare for AF_XDP multi-buffer testing'

Magnus Karlsson says:

====================
Prepare the AF_XDP selftests test framework code for the upcoming
multi-buffer support in AF_XDP. This so that the multi-buffer patch
set does not become way too large. In that upcoming patch set, we are
only including the multi-buffer tests together with any framework
code that depends on the new options bit introduced in the AF_XDP
multi-buffer implementation itself.

Currently, the test framework is based on the premise that a packet
consists of a single fragment and thus occupies a single buffer and a
single descriptor. Multi-buffer breaks this assumption, as that is the
whole purpose of it. Now, a packet can consist of multiple buffers and
therefore consume multiple descriptors.

The patch set starts with some clean-ups and simplifications followed
by patches that make sure that the current code works even when a
packet occupies multiple buffers. The actual code for sending and
receiving multi-buffer packets will be included in the AF_XDP
multi-buffer patch set as it depends on a new bit being used in the
options field of the descriptor.

Patch set anatomy:
1: The XDP program was unnecessarily changed many times. Fixes this.

2: There is no reason to generate a full UDP/IPv4 packet as it is
   never used. Simplify the code by just generating a valid Ethernet
   frame.

3: Introduce a more complicated payload pattern that can detect
   fragments out of bounds in a multi-buffer packet and other errors
   found in single-fragment packets.

4: As a convenience, dump the content of the faulty packet at error.

5: To simplify the code, make the usage of the packet stream for Tx
   and Rx more similar.

6: Store the offset of the packet in the buffer in the struct pkt
   definition instead of the address in the umem itself and introduce
   a simple buffer allocator. The address only made sense when all
   packets consumed a single buffer. Now, we do not know beforehand
   how many buffers a packet will consume, so we instead just allocate
   a buffer from the allocator and specify the offset within that
   buffer.

7: Test for huge pages only once instead of before each test that needs it.

8: Populate the fill ring based on how many frags are needed for each
   packet.

9: Change the data generation code so it can generate data for
   multi-buffer packets too.

10: Adjust the packet pacing algorithm so that it can cope with
    multi-buffer packets. The pacing algorithm is present so that Tx
    does not send too many packets/frames to Rx that it starts to drop
    packets. That would ruin the tests.

v1 -> v2:
* Fixed spelling error in patch #6 [Simon]
* Fixed compilation error with llvm in patch #7 [Daniel]

Thanks: Magnus
====================

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
17 months ago  selftests/xsk: adjust packet pacing for multi-buffer support
Magnus Karlsson [Tue, 16 May 2023 10:31:09 +0000 (12:31 +0200)]
selftests/xsk: adjust packet pacing for multi-buffer support

Modify the packet pacing algorithm so that it works with multi-buffer
packets. This algorithm makes sure we do not send so many buffers to
the receiving thread that packets have to be dropped. The previous
algorithm made the assumption that each packet only consumes one
buffer, but that is not true anymore when multi-buffer support gets
added. Instead, we find out what the largest packet size is in the
packet stream and assume that each packet will consume this many
buffers. This is conservative and overly cautious as there might be
smaller packets in the stream that need fewer buffers per packet. But
it keeps the algorithm simple.
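In other words, the bound is a simple conservative estimate along these
lines (a sketch; the function and variable names are illustrative, not
the test framework's):

  #include <linux/types.h>

  /* Worst case, every packet needs ceil(max_pkt_len / frame_size) buffers. */
  static __u32 max_outstanding_pkts(__u32 rx_ring_size, __u32 frame_size,
                                    __u32 max_pkt_len)
  {
          __u32 bufs_per_pkt = (max_pkt_len + frame_size - 1) / frame_size;

          return rx_ring_size / bufs_per_pkt;
  }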

Also simplify it by removing the pthread conditional and just testing
if there is enough space in the Rx thread before trying to send one
more batch. This also makes the tests run faster.

Signed-off-by: Magnus Karlsson <magnus.karlsson@intel.com>
Link: https://lore.kernel.org/r/20230516103109.3066-11-magnus.karlsson@gmail.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
17 months ago  selftests/xsk: generate data for multi-buffer packets
Magnus Karlsson [Tue, 16 May 2023 10:31:08 +0000 (12:31 +0200)]
selftests/xsk: generate data for multi-buffer packets

Add the ability to generate packet data that is correct for
multi-buffer packets. The Ethernet header should only go into the first
fragment, followed by data, and the other fragments should only contain
data. We also need to modify the pkt_dump function so that it knows
which fragment has an Ethernet header so it can print this.

Signed-off-by: Magnus Karlsson <magnus.karlsson@intel.com>
Link: https://lore.kernel.org/r/20230516103109.3066-10-magnus.karlsson@gmail.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
17 months ago  selftests/xsk: populate fill ring based on frags needed
Magnus Karlsson [Tue, 16 May 2023 10:31:07 +0000 (12:31 +0200)]
selftests/xsk: populate fill ring based on frags needed

Populate the fill ring based on the number of frags a packet
needs. With multi-buffer support, a packet might require more than a
single fragment/buffer, so the function xsk_populate_fill_ring() needs
to consider how many buffers a packet will consume, and put that many
buffers on the fill ring for each packet it should receive. As we are
still not sending any multi-buffer packets, the function will only
produce one buffer per packet at the moment.

Signed-off-by: Magnus Karlsson <magnus.karlsson@intel.com>
Link: https://lore.kernel.org/r/20230516103109.3066-9-magnus.karlsson@gmail.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
17 months ago  selftests/xsk: test for huge pages only once
Magnus Karlsson [Tue, 16 May 2023 10:31:06 +0000 (12:31 +0200)]
selftests/xsk: test for huge pages only once

Test for hugepages only once at the beginning of the execution of the
whole test suite, instead of before each test that needs huge
pages. These are the tests that use unaligned mode. As more unaligned
tests will be added, so the current system just does not scale.

With this change, there are now three possible outcomes of a test run:
fail, pass, or skip. To simplify the handling of this, the function
testapp_validate_traffic() now returns this value to the main loop. As
this function is used by nearly all tests, it meant a small change to
most of them.

Signed-off-by: Magnus Karlsson <magnus.karlsson@intel.com>
Link: https://lore.kernel.org/r/20230516103109.3066-8-magnus.karlsson@gmail.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
17 months ago  selftests/xsk: store offset in pkt instead of addr
Magnus Karlsson [Tue, 16 May 2023 10:31:05 +0000 (12:31 +0200)]
selftests/xsk: store offset in pkt instead of addr

Store the offset in struct pkt instead of the address. This is
important since the address is only meaningful in the context of a
packet that is stored in a single umem buffer and thus a single Tx
descriptor. If the packet, in contrast, needs to be represented by
multiple buffers in the umem, storing the address makes no sense since
the packet will consist of multiple buffers in the umem at various
addresses. This change is in preparation for the upcoming
multi-buffer support in AF_XDP and the corresponding tests.

So instead of indicating the address, we indicate the offset
of the packet in the first buffer. The actual address of the buffer is
allocated from the umem with a new function called
umem_alloc_buffer(). This also means we can get rid of the
use_fill_for_addr flag as the addresses fed into the fill ring will
always be the offset from the pkt specification in the packet stream
plus the address of the allocated buffer from the umem. No special
casing needed.

Signed-off-by: Magnus Karlsson <magnus.karlsson@intel.com>
Link: https://lore.kernel.org/r/20230516103109.3066-7-magnus.karlsson@gmail.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
17 months ago  selftests/xsk: add packet iterator for tx to packet stream
Magnus Karlsson [Tue, 16 May 2023 10:31:04 +0000 (12:31 +0200)]
selftests/xsk: add packet iterator for tx to packet stream

Convert the current variable rx_pkt_nb to an iterator that can be used
for both Rx and Tx. This to simplify the code and making Tx more like
Rx that already has this feature.

Signed-off-by: Magnus Karlsson <magnus.karlsson@intel.com>
Link: https://lore.kernel.org/r/20230516103109.3066-6-magnus.karlsson@gmail.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
17 months ago  selftests/xsk: dump packet at error
Magnus Karlsson [Tue, 16 May 2023 10:31:03 +0000 (12:31 +0200)]
selftests/xsk: dump packet at error

Dump the content of the packet when a test finds that packets are
received out of order, the length is wrong, or some other packet error
occurs. Use the already existing pkt_dump function for this and call it
when the above errors are detected. Get rid of the command line option
for dumping packets as it is not useful to print out thousands of
good packets followed by the faulty one you would like to see.

Signed-off-by: Magnus Karlsson <magnus.karlsson@intel.com>
Link: https://lore.kernel.org/r/20230516103109.3066-5-magnus.karlsson@gmail.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
17 months ago  selftests/xsk: add varying payload pattern within packet
Magnus Karlsson [Tue, 16 May 2023 10:31:02 +0000 (12:31 +0200)]
selftests/xsk: add varying payload pattern within packet

Add a varying payload pattern within the packet. Instead of having
just a packet number that is the same for all words in a packet, make
each word in the packet different. The upper 16 bits are set to the
packet number and the lower 16 bits are the sequence number of the
words in this packet. So the 3rd packet's 5th 32-bit word of data will
contain the number (2 << 16) | 4, as both are numbered from 0.
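A sketch of that pattern generation (the function name is illustrative;
byte-order handling is omitted):

  #include <linux/types.h>

  /* 32-bit word i of packet pkt_nb carries (pkt_nb << 16) | i. */
  static void write_pkt_pattern(__u32 *words, __u32 pkt_nb, __u32 nr_words)
  {
          for (__u32 i = 0; i < nr_words; i++)
                  words[i] = (pkt_nb << 16) | i;
  }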

This will make it easier to detect fragments that are out of order
when starting to test multi-buffer support.

The payload member in the packet is renamed pkt_nb to reflect that it
is now only a packet number, not the real payload as seen above.

Signed-off-by: Magnus Karlsson <magnus.karlsson@intel.com>
Link: https://lore.kernel.org/r/20230516103109.3066-4-magnus.karlsson@gmail.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
17 months ago  selftests/xsk: generate simpler packets with variable length
Magnus Karlsson [Tue, 16 May 2023 10:31:01 +0000 (12:31 +0200)]
selftests/xsk: generate simpler packets with variable length

Implement support for generating packets with variable length. Before
this patch, they were all 64 bytes, with the exception of some packets
of zero length and some that were too large. This feature will be used to test
multi-buffer support for which large packets are needed.

The packets are also made simpler, just a valid Ethernet header
followed by a sequence number. This is so that it will become easier to
implement packet generation when each packet consists of multiple
fragments. There is also a maintenance burden associated with carrying
all this code for generating proper UDP/IP packets, especially since
they are not needed.

Signed-off-by: Magnus Karlsson <magnus.karlsson@intel.com>
Link: https://lore.kernel.org/r/20230516103109.3066-3-magnus.karlsson@gmail.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
17 months ago  selftests/xsk: do not change XDP program when not necessary
Magnus Karlsson [Tue, 16 May 2023 10:31:00 +0000 (12:31 +0200)]
selftests/xsk: do not change XDP program when not necessary

Do not change the XDP program for the Tx thread when not needed. It
was erroneously compared to the XDP program for the Rx thread, which
is always going to be different, so the code made unnecessary switches
to the same program it had before. This did not
affect functionality, just performance.

Signed-off-by: Magnus Karlsson <magnus.karlsson@intel.com>
Link: https://lore.kernel.org/r/20230516103109.3066-2-magnus.karlsson@gmail.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
17 months ago  Merge branch 'bpf: Move kernel test kfuncs into bpf_testmod'
Alexei Starovoitov [Wed, 17 May 2023 05:09:24 +0000 (22:09 -0700)]
Merge branch 'bpf: Move kernel test kfuncs into bpf_testmod'

Jiri Olsa says:

====================
hi,
I noticed several times in discussions that we should move test kfuncs
into a kernel module, which is now perhaps even more pressing with all
the kfunc effort. This patchset moves all the test kfuncs into bpf_testmod.

I added bpf_testmod/bpf_testmod_kfunc.h header that is shared between
bpf_testmod kernel module and BPF programs.

v4 changes:
  - s390 supports long calls [1] now, so it can call now kfuncs from module [Ilya]
  - added acks [David]
  - cleanups for ptr_to_u64 function [David]
  - use relative path for bpf_testmod_kfunc.h include [Andrii]
  - new libbpf fix (patch 1) for gen_loader

v3 changes:
  - added acks [David]
  - added bpf_testmod.ko make dependency for bpf test progs [David]
  - better handling of __ksym and refcount_t in bpf_testmod_kfunc.h [David]
  - removed 'extern' from kfuncs declarations [David]
  - typo in header guard macro [David]
  - use only stdout in un/load_bpf_testmod

v2 changes:
  - add 74bc3a5acc82 into bpf-next/master CI, so the test would pass
    https://github.com/kernel-patches/vmtest/pull/192
  - remove extra externs [Artem]
  - using un/load_bpf_testmod in other tests
  - rebased

thanks,
jirka

[1] 1cf3bfc60f98 bpf: Support 64-bit pointers to kfuncs
====================

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
17 months ago  bpf: Move kernel test kfuncs to bpf_testmod
Jiri Olsa [Mon, 15 May 2023 13:37:56 +0000 (15:37 +0200)]
bpf: Move kernel test kfuncs to bpf_testmod

Moving kernel test kfuncs into bpf_testmod kernel module, and adding
necessary init calls and BTF IDs records.

We need to keep following structs in kernel:
  struct prog_test_ref_kfunc
  struct prog_test_member (embedded in prog_test_ref_kfunc)

The reason is that they need to be marked as rcu safe (check test
prog mark_ref_as_untrusted_or_null) and such objects are currently
required to be defined only in the kernel (see the rcu_safe_kptr check
in the kernel).

We need to keep also dtor functions for both objects in kernel:
  bpf_kfunc_call_test_release
  bpf_kfunc_call_memb_release

We also keep a copy of these structs in bpf_testmod_kfunc.h, because
other test functions use them. This is unfortunate, but it is just a
temporary solution until we are able to move these structs to
bpf_testmod completely.

As suggested by David, add a bpf_testmod.ko make dependency for
bpf programs, so they are rebuilt if we change the bpf_testmod.ko
module.

Also adding missing __bpf_kfunc to bpf_kfunc_call_test4 functions.

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Acked-by: David Vernet <void@manifault.com>
Link: https://lore.kernel.org/r/20230515133756.1658301-11-jolsa@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
17 months ago  selftests/bpf: Remove extern from kfuncs declarations
Jiri Olsa [Mon, 15 May 2023 13:37:55 +0000 (15:37 +0200)]
selftests/bpf: Remove extern from kfuncs declarations

There's no need to keep the extern in kfuncs declarations.

Suggested-by: David Vernet <void@manifault.com>
Acked-by: David Vernet <void@manifault.com>
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Link: https://lore.kernel.org/r/20230515133756.1658301-10-jolsa@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
17 months ago  selftests/bpf: Allow to use kfunc from testmod.ko in test_verifier
Jiri Olsa [Mon, 15 May 2023 13:37:54 +0000 (15:37 +0200)]
selftests/bpf: Allow to use kfunc from testmod.ko in test_verifier

Currently the test_verifier allows test to specify kfunc symbol
and search for it in the kernel BTF.

Add the possibility to also search for the kfunc in the bpf_testmod
module when it's not found in kernel BTF.

To find bpf_testmod btf we need to get back SYS_ADMIN cap.

Acked-by: David Vernet <void@manifault.com>
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Link: https://lore.kernel.org/r/20230515133756.1658301-9-jolsa@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
17 months ago  selftests/bpf: Load bpf_testmod for verifier test
Jiri Olsa [Mon, 15 May 2023 13:37:53 +0000 (15:37 +0200)]
selftests/bpf: Load bpf_testmod for verifier test

Load the bpf_testmod kernel module for the verifier test. We will
move all the test kfuncs into bpf_testmod in a following change.

Acked-by: David Vernet <void@manifault.com>
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Link: https://lore.kernel.org/r/20230515133756.1658301-8-jolsa@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
17 months ago  selftests/bpf: Use un/load_bpf_testmod functions in tests
Jiri Olsa [Mon, 15 May 2023 13:37:52 +0000 (15:37 +0200)]
selftests/bpf: Use un/load_bpf_testmod functions in tests

Now that we have un/load_bpf_testmod helpers in testing_helpers.h,
we can use them in other tests and save some lines.

Acked-by: David Vernet <void@manifault.com>
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Link: https://lore.kernel.org/r/20230515133756.1658301-7-jolsa@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
17 months ago  selftests/bpf: Do not unload bpf_testmod in load_bpf_testmod
Jiri Olsa [Mon, 15 May 2023 13:37:51 +0000 (15:37 +0200)]
selftests/bpf: Do not unload bpf_testmod in load_bpf_testmod

Do not unload bpf_testmod in load_bpf_testmod; instead, call
unload_bpf_testmod separately.

This way we will be able to use the un/load_bpf_testmod functions
in other tests that un/load the bpf_testmod module.

Acked-by: David Vernet <void@manifault.com>
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Link: https://lore.kernel.org/r/20230515133756.1658301-6-jolsa@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
17 months ago  selftests/bpf: Use only stdout in un/load_bpf_testmod functions
Jiri Olsa [Mon, 15 May 2023 13:37:50 +0000 (15:37 +0200)]
selftests/bpf: Use only stdout in un/load_bpf_testmod functions

We are about to use the un/load_bpf_testmod functions in a couple of
tests and it's better to print output to stdout, so it's aligned with
the tests' ASSERT macro output, which uses stdout as well.

Acked-by: David Vernet <void@manifault.com>
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Link: https://lore.kernel.org/r/20230515133756.1658301-5-jolsa@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
17 months ago  selftests/bpf: Move test_progs helpers to testing_helpers object
Jiri Olsa [Mon, 15 May 2023 13:37:49 +0000 (15:37 +0200)]
selftests/bpf: Move test_progs helpers to testing_helpers object

Moving test_progs helpers to testing_helpers object so they can be
used from test_verifier in following changes.

Also adding missing ifndef header guard to testing_helpers.h header.

Using stderr instead of env.stderr because un/load_bpf_testmod helpers
will be used outside test_progs. Also at the point of calling them
in test_progs the std files are not hijacked yet and stderr is the
same as env.stderr.

Acked-by: David Vernet <void@manifault.com>
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Link: https://lore.kernel.org/r/20230515133756.1658301-4-jolsa@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
17 months ago  selftests/bpf: Move kfunc exports to bpf_testmod/bpf_testmod_kfunc.h
Jiri Olsa [Mon, 15 May 2023 13:37:48 +0000 (15:37 +0200)]
selftests/bpf: Move kfunc exports to bpf_testmod/bpf_testmod_kfunc.h

Move all kfunc exports into separate bpf_testmod_kfunc.h header file
and include it in tests that need it.

We will move all test kfuncs into bpf_testmod in following change,
so it's convenient to have declarations in single place.

The bpf_testmod_kfunc.h is included by both bpf_testmod and bpf
programs that use test kfuncs.

As suggested by David, the bpf_testmod_kfunc.h includes vmlinux.h
and bpf/bpf_helpers.h for bpf programs build, so the declarations
have proper __ksym attribute and we can resolve all the structs.

Note in kfunc_call_test_subprog.c we can no longer use the sk_state
define from bpf_tcp_helpers.h (because it clashed with vmlinux.h)
and we need to address __sk_common.skc_state field directly.

Acked-by: David Vernet <void@manifault.com>
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Link: https://lore.kernel.org/r/20230515133756.1658301-3-jolsa@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
17 months ago  libbpf: Store zero fd to fd_array for loader kfunc relocation
Jiri Olsa [Mon, 15 May 2023 13:37:47 +0000 (15:37 +0200)]
libbpf: Store zero fd to fd_array for loader kfunc relocation

When moving some of the test kfuncs to bpf_testmod I hit an issue
when some of the kfuncs that an object uses are in a module and some
in vmlinux.

The problem is that both vmlinux and module kfuncs get allocated a
btf_fd_idx index into fd_array, but we store the BTF fd value in it
only for the module's kfunc, not for vmlinux's, because it's zero.

Then after the program is loaded we check if fd_array[btf_fd_idx] != 0
and close the fd.

When the object has kfuncs from both vmlinux and a module, the fd in
fd_array[btf_fd_idx] from the previous load will still be there for
vmlinux's kfunc, so we close an unrelated fd (of the program we just
loaded, in my case).

Fix this by storing zero to fd_array[btf_fd_idx] for vmlinux
kfuncs, so that we won't close a stale fd.

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Link: https://lore.kernel.org/r/20230515133756.1658301-2-jolsa@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
17 months ago  selftests/bpf: Fix s390 sock_field test failure
Yonghong Song [Tue, 16 May 2023 21:49:45 +0000 (14:49 -0700)]
selftests/bpf: Fix s390 sock_field test failure

llvm patch [1] enabled cross-function optimization for function
arguments (ArgumentPromotion) at the -O2 level, and this caused the s390
sock_fields test failure ([2]). The failure is gone right now as patch
[1] was reverted in [3]. But it is possible that patch [3] will be
reverted again and then the test failure in [2] will show up again, so
it is desirable to fix the failure regardless.

The following is an analysis why sock_field test fails with
llvm patch [1].

The main problem is in
  static __noinline bool sk_dst_port__load_word(struct bpf_sock *sk)
  {
        __u32 *word = (__u32 *)&sk->dst_port;
        return word[0] == bpf_htons(0xcafe);
  }
  static __noinline bool sk_dst_port__load_half(struct bpf_sock *sk)
  {
        __u16 *half = (__u16 *)&sk->dst_port;
        return half[0] == bpf_htons(0xcafe);
  }
  ...
  int read_sk_dst_port(struct __sk_buff *skb)
  {
...
        sk = skb->sk;
...
        if (!sk_dst_port__load_word(sk))
                RET_LOG();
        if (!sk_dst_port__load_half(sk))
                RET_LOG();
...
  }

Through cross-function ArgumentPromotion optimization, the compiler
does:
  static __noinline bool sk_dst_port__load_word(__u32 word_val)
  {
        return word_val == bpf_htons(0xcafe);
  }
  static __noinline bool sk_dst_port__load_half(__u16 half_val)
  {
        return half_val == bpf_htons(0xcafe);
  }
  ...
  int read_sk_dst_port(struct __sk_buff *skb)
  {
        ...
        sk = skb->sk;
        ...
        __u32 *word = (__u32 *)&sk->dst_port;
        __u32 word_val = word[0];
        ...
        if (!sk_dst_port__load_word(word_val))
                RET_LOG();

        __u16 half_val = word_val >> 16;
        if (!sk_dst_port__load_half(half_val))
                RET_LOG();
        ...
  }

In current uapi bpf.h, we have
  struct bpf_sock {
...
        __be16 dst_port;        /* network byte order */
        __u16 :16;              /* zero padding */
...
  };
But in the old kernel (e.g., 5.6) we have
  struct bpf_sock {
...
        __u32 dst_port;         /* network byte order */
...
  };

So for backward compatibility reasons, a 4-byte load of
dst_port is converted to a 2-byte load internally.
Specifically, 'word_val = word[0]' is replaced by a 2-byte load
by the verifier and this causes the trouble for the later
sk_dst_port__load_half() where half_val becomes 0.

A typical user program won't have such a code pattern triggering
the above bug, so let us fix the test failure with a source
code change. Adding an empty asm volatile statement seems
enough to prevent the undesired transformation.
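A sketch of what such a barrier looks like in the affected helper (the
exact placement in the real fix may differ):

  static __noinline bool sk_dst_port__load_word(struct bpf_sock *sk)
  {
          __u32 *word = (__u32 *)&sk->dst_port;

          /* empty asm blocks the ArgumentPromotion rewrite described above */
          asm volatile ("");
          return word[0] == bpf_htons(0xcafe);
  }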

  [1] https://reviews.llvm.org/D148269
  [2] https://lore.kernel.org/bpf/e7f2c5e8-a50c-198d-8f95-388165f1e4fd@meta.com/
  [3] https://reviews.llvm.org/rG141be5c062ecf22bd287afffd310e8ac4711444a

Tested-by: Ilya Leoshkevich <iii@linux.ibm.com>
Signed-off-by: Yonghong Song <yhs@fb.com>
Link: https://lore.kernel.org/r/20230516214945.1013578-1-yhs@fb.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
17 months ago  selftests/bpf: improve netcnt test robustness
Andrii Nakryiko [Mon, 15 May 2023 20:48:33 +0000 (13:48 -0700)]
selftests/bpf: improve netcnt test robustness

Change netcnt to demand at least 10K packets, as we frequently see some
stray packet arriving during the test in BPF CI. It seems more important
to make sure we haven't lost any packets than to enforce an exact
number of packets.

Cc: Stanislav Fomichev <sdf@google.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: Stanislav Fomichev <sdf@google.com>
Link: https://lore.kernel.org/r/20230515204833.2832000-1-andrii@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
17 months ago  Revert "net: Remove low_thresh in ip defrag"
Jakub Kicinski [Wed, 17 May 2023 03:41:12 +0000 (20:41 -0700)]
Revert "net: Remove low_thresh in ip defrag"

This reverts commit b2cbac9b9b28730e9e53be20b6cdf979d3b9f27e.

We have multiple reports of obvious breakage from this patch.

Reported-by: Ido Schimmel <idosch@idosch.org>
Link: https://lore.kernel.org/all/ZGIRWjNcfqI8yY8W@shredder/
Link: https://lore.kernel.org/all/CADJHv_sDK=0RrMA2FTZQV5fw7UQ+qY=HG21Wu5qb0V9vvx5w6A@mail.gmail.com/
Reported-by: syzbot+a5e719ac7c268e414c95@syzkaller.appspotmail.com
Reported-by: syzbot+a03fd670838d927d9cd8@syzkaller.appspotmail.com
Fixes: b2cbac9b9b28 ("net: Remove low_thresh in ip defrag")
Link: https://lore.kernel.org/r/20230517034112.1261835-1-kuba@kernel.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
17 months agoMerge tag 'for-netdev' of https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf...
Jakub Kicinski [Wed, 17 May 2023 02:50:05 +0000 (19:50 -0700)]
Merge tag 'for-netdev' of https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next

Daniel Borkmann says:

====================
pull-request: bpf-next 2023-05-16

We've added 57 non-merge commits during the last 19 day(s) which contain
a total of 63 files changed, 3293 insertions(+), 690 deletions(-).

The main changes are:

1) Add precision propagation to verifier for subprogs and callbacks,
   from Andrii Nakryiko.

2) Improve BPF's {g,s}etsockopt() handling with wrong option lengths,
   from Stanislav Fomichev.

3) Utilize pahole v1.25 for the kernel's BTF generation to filter out
   inconsistent function prototypes, from Alan Maguire.

4) Various dyn-pointer verifier improvements to relax restrictions,
   from Daniel Rosenberg.

5) Add a new bpf_task_under_cgroup() kfunc for designated task,
   from Feng Zhou.

6) Unblock tests for arm64 BPF CI after ftrace supporting direct call,
   from Florent Revest.

7) Add XDP hint kfunc metadata for RX hash/timestamp for igc,
   from Jesper Dangaard Brouer.

8) Add several new dyn-pointer kfuncs to ease their usability,
   from Joanne Koong.

9) Add in-depth LRU internals description and dot function graph,
   from Joe Stringer.

10) Fix KCSAN report on bpf_lru_list when accessing node->ref,
    from Martin KaFai Lau.

11) Only dump unprivileged_bpf_disabled log warning upon write,
    from Kui-Feng Lee.

12) Extend test_progs to directly passing allow/denylist file,
    from Stephen Veiss.

13) Fix BPF trampoline memleak upon failure attaching to fentry,
    from Yafang Shao.

14) Fix emitting struct bpf_tcp_sock type in vmlinux BTF,
    from Yonghong Song.

* tag 'for-netdev' of https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next: (57 commits)
  bpf: Fix memleak due to fentry attach failure
  bpf: Remove bpf trampoline selector
  bpf, arm64: Support struct arguments in the BPF trampoline
  bpftool: JIT limited misreported as negative value on aarch64
  bpf: fix calculation of subseq_idx during precision backtracking
  bpf: Remove anonymous union in bpf_kfunc_call_arg_meta
  bpf: Document EFAULT changes for sockopt
  selftests/bpf: Correctly handle optlen > 4096
  selftests/bpf: Update EFAULT {g,s}etsockopt selftests
  bpf: Don't EFAULT for {g,s}setsockopt with wrong optlen
  libbpf: fix offsetof() and container_of() to work with CO-RE
  bpf: Address KCSAN report on bpf_lru_list
  bpf: Add --skip_encoding_btf_inconsistent_proto, --btf_gen_optimized to pahole flags for v1.25
  selftests/bpf: Accept mem from dynptr in helper funcs
  bpf: verifier: Accept dynptr mem as mem in helpers
  selftests/bpf: Check overflow in optional buffer
  selftests/bpf: Test allowing NULL buffer in dynptr slice
  bpf: Allow NULL buffers in bpf_dynptr_slice(_rw)
  selftests/bpf: Add testcase for bpf_task_under_cgroup
  bpf: Add bpf_task_under_cgroup() kfunc
  ...
====================

Link: https://lore.kernel.org/r/20230515225603.27027-1-daniel@iogearbox.net
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
18 months agoMerge branch 'spdx-conversion-for-bonding-8390-and-i825xx-drivers'
Paolo Abeni [Tue, 16 May 2023 13:39:09 +0000 (15:39 +0200)]
Merge branch 'spdx-conversion-for-bonding-8390-and-i825xx-drivers'

Bagas Sanjaya says:

====================
SPDX conversion for bonding, 8390, and i825xx drivers

This series is SPDX conversion for bonding, 8390, and i825xx driver
subsystems. It is split from v2 of my SPDX conversion series in
response to Didi's GPL full name fixes [1] to make it easily
digestible.

The conversion in this series is divided by each subsystem and by
license type.

[1]: https://lore.kernel.org/linux-spdx/20230512100620.36807-1-bagasdotme@gmail.com/
====================

Link: https://lore.kernel.org/r/20230515060714.621952-1-bagasdotme@gmail.com
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
18 months agonet: ethernet: i825xx: sun3_8256: Add SPDX license identifier
Bagas Sanjaya [Mon, 15 May 2023 06:07:15 +0000 (13:07 +0700)]
net: ethernet: i825xx: sun3_8256: Add SPDX license identifier

The boilerplate reads that the sun3_8256 driver is an extension to the
Linux kernel core, hence add an SPDX license identifier for GPL 2.0.

Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Michael Hipp <hippm@informatik.uni-tuebingen.de>
Cc: Sam Creasey <sammy@sammy.net>
Signed-off-by: Bagas Sanjaya <bagasdotme@gmail.com>
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
18 months agonet: ethernet: i825xx: Replace unversioned GPL (GPL 1.0) notice with SPDX identifier
Bagas Sanjaya [Mon, 15 May 2023 06:07:14 +0000 (13:07 +0700)]
net: ethernet: i825xx: Replace unversioned GPL (GPL 1.0) notice with SPDX identifier

Replace unversioned GPL boilerplate notice with corresponding SPDX
license identifier, which is GPL 1.0+.

Cc: Donald Becker <becker@scyld.com>
Cc: Richard Hirst <richard@sleepie.demon.co.uk>
Cc: Sam Creasey <sammy@sammy.net>
Signed-off-by: Bagas Sanjaya <bagasdotme@gmail.com>
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
18 months agonet: ethernet: 8390: Replace GPL 2.0 boilerplate with SPDX identifier
Bagas Sanjaya [Mon, 15 May 2023 06:07:13 +0000 (13:07 +0700)]
net: ethernet: 8390: Replace GPL 2.0 boilerplate with SPDX identifier

The boilerplate refers to COPYING in the top-level directory of kernel
tree. Replace it with corresponding SPDX license identifier.

Cc: Donald Becker <becker@scyld.com>
Cc: Peter De Schrijver <p2@mind.be>
Cc: Topi Kanerva <topi@susanna.oulu.fi>
Cc: Alain Malek <Alain.Malek@cryogen.com>
Cc: Bruce Abbott <bhabbott@inhb.co.nz>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>
Cc: Richard Fontana <rfontana@redhat.com>
Acked-by: Greg Ungerer <gerg@linux-m68k.org>
Signed-off-by: Bagas Sanjaya <bagasdotme@gmail.com>
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
18 months agonet: ethernet: 8390: Convert unversioned GPL notice to SPDX license identifier
Bagas Sanjaya [Mon, 15 May 2023 06:07:12 +0000 (13:07 +0700)]
net: ethernet: 8390: Convert unversioned GPL notice to SPDX license identifier

Replace the unversioned GPL boilerplate notice with an SPDX tag for GPL 1.0+.
For ne2k-pci.c, only add the SPDX tag and keep the boilerplate,
since the boilerplate notes that it must be preserved.

Cc: David A. Hinds <dahinds@users.sourceforge.net>
Cc: Donald Becker <becker@scyld.com>
Cc: Alan Cox <alan@linux.intel.com>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>
Cc: Richard Fontana <rfontana@redhat.com>
Signed-off-by: Bagas Sanjaya <bagasdotme@gmail.com>
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
18 months agonet: bonding: Add SPDX identifier to remaining files
Bagas Sanjaya [Mon, 15 May 2023 06:07:11 +0000 (13:07 +0700)]
net: bonding: Add SPDX identifier to remaining files

Previous batches of SPDX conversion missed bond_main.c and bonding_priv.h
because these files don't mention the intended GPL version. Add SPDX
identifiers to these files, assuming GPL 1.0+.

Cc: Thomas Davis <tadavis@lbl.gov>
Cc: Christophe JAILLET <christophe.jaillet@wanadoo.fr>
Cc: Stephen Hemminger <stephen@networkplumber.org>
Reviewed-by: Simon Horman <simon.horman@corigine.com>
Signed-off-by: Bagas Sanjaya <bagasdotme@gmail.com>
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
18 months agonet: skbuff: update comment about pfmemalloc propagating
Yunsheng Lin [Mon, 15 May 2023 05:01:07 +0000 (13:01 +0800)]
net: skbuff: update comment about pfmemalloc propagating

__skb_fill_page_desc_noacc() does not do any pfmemalloc
propagation, and yet it has a comment about that. Commit
84ce071e38a6 ("net: introduce __skb_fill_page_desc_noacc")
may have accidentally moved the comment there, so move it
back to __skb_fill_page_desc(), which is the function that
is supposed to do pfmemalloc propagation.

Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
CC: Pavel Begunkov <asml.silence@gmail.com>
Reviewed-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/20230515050107.46397-1-linyunsheng@huawei.com
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
18 months agobpf: Fix memleak due to fentry attach failure
Yafang Shao [Mon, 15 May 2023 13:08:47 +0000 (13:08 +0000)]
bpf: Fix memleak due to fentry attach failure

If it fails to attach fentry, the allocated bpf trampoline image will be
left in the system. That can be verified by checking /proc/kallsyms.

This memleak can be verified with a simple bpf program as follows:

  SEC("fentry/trap_init")
  int fentry_run()
  {
      return 0;
  }

It will fail to attach trap_init because this function is freed after
kernel init, and then we can find the trampoline image is left in the
system by checking /proc/kallsyms.

  $ tail /proc/kallsyms
  ffffffffc0613000 t bpf_trampoline_6442453466_1  [bpf]
  ffffffffc06c3000 t bpf_trampoline_6442453466_1  [bpf]

  $ bpftool btf dump file /sys/kernel/btf/vmlinux | grep "FUNC 'trap_init'"
  [2522] FUNC 'trap_init' type_id=119 linkage=static

  $ echo $((6442453466 & 0x7fffffff))
  2522

Note that there are two bpf trampoline images left, because libbpf
will fall back to a raw tracepoint if -EINVAL is returned.

Fixes: e21aa341785c ("bpf: Fix fexit trampoline.")
Signed-off-by: Yafang Shao <laoar.shao@gmail.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Song Liu <song@kernel.org>
Cc: Jiri Olsa <olsajiri@gmail.com>
Link: https://lore.kernel.org/bpf/20230515130849.57502-2-laoar.shao@gmail.com
18 months agobpf: Remove bpf trampoline selector
Yafang Shao [Mon, 15 May 2023 13:08:48 +0000 (13:08 +0000)]
bpf: Remove bpf trampoline selector

After commit e21aa341785c ("bpf: Fix fexit trampoline."), the selector is only
used to indicate how many times the bpf trampoline image has been updated, and
it is displayed in the trampoline ksym name. After the trampoline is freed, the
selector starts from 0 again. So the selector is a useless value to the
user. We can remove it.

If the user wants to check whether the bpf trampoline image has been updated
or not, the user can compare the address: each time the trampoline image is
updated, the address changes accordingly. Jiri also pointed out another
issue, that perf is still using the old name "bpf_trampoline_%lu", so this
change also fixes that issue in perf.

Fixes: e21aa341785c ("bpf: Fix fexit trampoline.")
Signed-off-by: Yafang Shao <laoar.shao@gmail.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Song Liu <song@kernel.org>
Cc: Jiri Olsa <olsajiri@gmail.com>
Link: https://lore.kernel.org/bpf/ZFvOOlrmHiY9AgXE@krava
Link: https://lore.kernel.org/bpf/20230515130849.57502-3-laoar.shao@gmail.com
18 months agobpf, arm64: Support struct arguments in the BPF trampoline
Florent Revest [Thu, 11 May 2023 14:05:07 +0000 (16:05 +0200)]
bpf, arm64: Support struct arguments in the BPF trampoline

This extends the BPF trampoline JIT to support attachment to functions
that take small structures (up to 128 bits) as arguments. This is trivially
achieved by saving/restoring a number of "argument registers" rather
than a number of arguments.

The AAPCS64 section 6.8.2 describes the parameter passing ABI.
"Composite types" (like C structs) below 16 bytes (as enforced by the
BPF verifier) are provided as part of the 8 argument registers as
explained in section C.12.

Signed-off-by: Florent Revest <revest@chromium.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Yonghong Song <yhs@fb.com>
Acked-by: Xu Kuohai <xukuohai@huawei.com>
Link: https://lore.kernel.org/bpf/20230511140507.514888-1-revest@chromium.org
18 months agobpftool: JIT limited misreported as negative value on aarch64
Alan Maguire [Fri, 12 May 2023 11:31:34 +0000 (12:31 +0100)]
bpftool: JIT limited misreported as negative value on aarch64

On aarch64, "bpftool feature" reports an incorrect BPF JIT limit:

$ sudo /sbin/bpftool feature
Scanning system configuration...
bpf() syscall restricted to privileged users
JIT compiler is enabled
JIT compiler hardening is disabled
JIT compiler kallsyms exports are enabled for root
skipping kernel config, can't open file: No such file or directory
Global memory limit for JIT compiler for unprivileged users is -201326592 bytes

This is because /proc/sys/net/core/bpf_jit_limit reports

$ sudo cat /proc/sys/net/core/bpf_jit_limit
68169519595520

...and an int is assumed in read_procfs().  Change read_procfs()
to return a long to avoid negative value reporting.
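
A minimal sketch of the kind of change described, assuming a simple
procfs reader (illustrative only, not the exact bpftool helper):

  #include <stdio.h>
  #include <stdlib.h>

  /* parse a single number from a procfs file; returning long instead
   * of int avoids truncating large values such as bpf_jit_limit
   */
  static long read_procfs(const char *path)
  {
        char *endptr, *line = NULL;
        size_t len = 0;
        FILE *fd;
        long res;

        fd = fopen(path, "r");
        if (!fd)
                return -1;
        if (getline(&line, &len, fd) == -1)
                res = -1;
        else
                res = strtol(line, &endptr, 10);
        free(line);
        fclose(fd);
        return res;
  }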

Fixes: 7a4522bbef0c ("tools: bpftool: add probes for /proc/ eBPF parameters")
Reported-by: Nicky Veitch <nicky.veitch@oracle.com>
Signed-off-by: Alan Maguire <alan.maguire@oracle.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Jiri Olsa <jolsa@kernel.org>
Acked-by: Quentin Monnet <quentin@isovalent.com>
Link: https://lore.kernel.org/bpf/20230512113134.58996-1-alan.maguire@oracle.com
18 months agobpf: fix calculation of subseq_idx during precision backtracking
Andrii Nakryiko [Mon, 15 May 2023 18:07:10 +0000 (11:07 -0700)]
bpf: fix calculation of subseq_idx during precision backtracking

Subsequent instruction index (subseq_idx) is an index of an instruction
that was verified/executed by verifier after the currently processed
instruction. It is maintained during precision backtracking processing
and is used to detect various subprog calling conditions.

This patch fixes the bug with incorrectly resetting subseq_idx to -1
when going from child state to parent state during backtracking. If we
don't maintain correct subseq_idx we can misidentify subprog calls
leading to precision tracking bugs.

One such case was triggered by test_global_funcs/global_func9 test where
global subprog call happened to be the very last instruction in parent
state, leading to subseq_idx==-1, triggering WARN_ONCE:

  [   36.045754] verifier backtracking bug
  [   36.045764] WARNING: CPU: 13 PID: 2073 at kernel/bpf/verifier.c:3503 __mark_chain_precision+0xcc6/0xde0
  [   36.046819] Modules linked in: aesni_intel(E) crypto_simd(E) cryptd(E) kvm_intel(E) kvm(E) irqbypass(E) i2c_piix4(E) serio_raw(E) i2c_core(E) crc32c_intel)
  [   36.048040] CPU: 13 PID: 2073 Comm: test_progs Tainted: G        W  OE      6.3.0-07976-g4d585f48ee6b-dirty #972
  [   36.048783] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.12.0-59-gc9ba5276e321-prebuilt.qemu.org 04/01/2014
  [   36.049648] RIP: 0010:__mark_chain_precision+0xcc6/0xde0
  [   36.050038] Code: 3d 82 c6 05 bb 35 32 02 01 e8 66 21 ec ff 0f 0b b8 f2 ff ff ff e9 30 f5 ff ff 48 c7 c7 f3 61 3d 82 4c 89 0c 24 e8 4a 21 ec ff <0f> 0b 4c0

With the fix precision tracking across multiple states works correctly now:

mark_precise: frame0: last_idx 45 first_idx 38 subseq_idx -1
mark_precise: frame0: regs=r8 stack= before 44: (61) r7 = *(u32 *)(r10 -4)
mark_precise: frame0: regs=r8 stack= before 43: (85) call pc+41
mark_precise: frame0: regs=r8 stack= before 42: (07) r1 += -48
mark_precise: frame0: regs=r8 stack= before 41: (bf) r1 = r10
mark_precise: frame0: regs=r8 stack= before 40: (63) *(u32 *)(r10 -48) = r1
mark_precise: frame0: regs=r8 stack= before 39: (b4) w1 = 0
mark_precise: frame0: regs=r8 stack= before 38: (85) call pc+38
mark_precise: frame0: parent state regs=r8 stack=:  R0_w=scalar() R1_w=map_value(off=4,ks=4,vs=8,imm=0) R6=1 R7_w=scalar() R8_r=P0 R10=fpm
mark_precise: frame0: last_idx 36 first_idx 28 subseq_idx 38
mark_precise: frame0: regs=r8 stack= before 36: (18) r1 = 0xffff888104f2ed14
mark_precise: frame0: regs=r8 stack= before 35: (85) call pc+33
mark_precise: frame0: regs=r8 stack= before 33: (18) r1 = 0xffff888104f2ed10
mark_precise: frame0: regs=r8 stack= before 32: (85) call pc+36
mark_precise: frame0: regs=r8 stack= before 31: (07) r1 += -4
mark_precise: frame0: regs=r8 stack= before 30: (bf) r1 = r10
mark_precise: frame0: regs=r8 stack= before 29: (63) *(u32 *)(r10 -4) = r7
mark_precise: frame0: regs=r8 stack= before 28: (4c) w7 |= w0
mark_precise: frame0: parent state regs=r8 stack=:  R0_rw=scalar() R6=1 R7_rw=scalar() R8_rw=P0 R10=fp0 fp-48_r=mmmmmmmm
mark_precise: frame0: last_idx 27 first_idx 16 subseq_idx 28
mark_precise: frame0: regs=r8 stack= before 27: (85) call pc+31
mark_precise: frame0: regs=r8 stack= before 26: (b7) r1 = 0
mark_precise: frame0: regs=r8 stack= before 25: (b7) r8 = 0

Note how subseq_idx starts out as -1, then is preserved as 38 and then 28 as we
go up the parent state chain.

Reported-by: Alexei Starovoitov <ast@kernel.org>
Fixes: fde2a3882bd0 ("bpf: support precision propagation in the presence of subprogs")
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/r/20230515180710.1535018-1-andrii@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
18 months agobpf: Remove anonymous union in bpf_kfunc_call_arg_meta
Dave Marchevsky [Wed, 10 May 2023 21:30:47 +0000 (14:30 -0700)]
bpf: Remove anonymous union in bpf_kfunc_call_arg_meta

For kfuncs like bpf_obj_drop and bpf_refcount_acquire - which take
user-defined types as input - the verifier needs to track the specific
type passed in when checking a particular kfunc call. This requires
tracking (btf, btf_id) tuple. In commit 7c50b1cb76ac
("bpf: Add bpf_refcount_acquire kfunc") I added an anonymous union with
inner structs named after the specific kfuncs tracking this information,
with the goal of making it more obvious which kfunc this data was being
tracked / expected to be tracked on behalf of.

In a recent series adding a new user of this tuple, Alexei mentioned
that he didn't like this union usage as it doesn't really help with
readability or bug-proofing ([0]). In an offline convo we agreed to
have the tuple be fields (arg_btf, arg_btf_id), with comments in
bpf_kfunc_call_arg_meta definition enumerating the uses of the fields by
kfunc-specific handling logic. Such a pattern is used by struct
bpf_reg_state without trouble.

Accordingly, this patch removes the anonymous union in favor of arg_btf
and arg_btf_id fields and comment enumerating their current uses. The
patch also removes struct btf_and_id, which was only being used by the
removed union's inner structs.
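
A hedged sketch of the resulting shape (the comment wording is
illustrative, not the exact text in the header):

  struct bpf_kfunc_call_arg_meta {
        /* ... */
        /* arg_btf and arg_btf_id track the user-defined type passed to
         * kfuncs such as bpf_obj_drop and bpf_refcount_acquire
         */
        struct btf *arg_btf;
        u32 arg_btf_id;
        /* ... */
  };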

This is a mechanical change; existing linked_list and rbtree tests will
validate that the correct (btf, btf_id) is being passed.

  [0]: https://lore.kernel.org/bpf/20230505021707.vlyiwy57vwxglbka@dhcp-172-26-102-232.dhcp.thefacebook.com

Signed-off-by: Dave Marchevsky <davemarchevsky@fb.com>
Link: https://lore.kernel.org/r/20230510213047.1633612-1-davemarchevsky@fb.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
18 months agonfc: llcp: fix possible use of uninitialized variable in nfc_llcp_send_connect()
Krzysztof Kozlowski [Sat, 13 May 2023 11:52:04 +0000 (13:52 +0200)]
nfc: llcp: fix possible use of uninitialized variable in nfc_llcp_send_connect()

If sock->service_name is NULL, the local variable
service_name_tlv_length will not be assigned by nfc_llcp_build_tlv(),
later leading to use of an uninitialized value from the stack. Smatch warning:

  net/nfc/llcp_commands.c:442 nfc_llcp_send_connect() error: uninitialized symbol 'service_name_tlv_length'.
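
One way to address it is sketched below (the variable type and exact
placement are illustrative, not necessarily the final patch):

  /* initialize so a NULL service_name cannot leave stack garbage behind */
  u8 service_name_tlv_length = 0;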

Fixes: de9e5aeb4f40 ("NFC: llcp: Fix usage of llcp_add_tlv()")
Signed-off-by: Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
18 months agoocteontx2-pf: mcs: Remove unneeded semicolon
Yang Li [Mon, 15 May 2023 08:56:45 +0000 (16:56 +0800)]
octeontx2-pf: mcs: Remove unneeded semicolon

./drivers/net/ethernet/marvell/octeontx2/nic/cn10k_macsec.c:242:2-3: Unneeded semicolon
./drivers/net/ethernet/marvell/octeontx2/nic/cn10k_macsec.c:476:2-3: Unneeded semicolon

Reported-by: Abaci Robot <abaci@linux.alibaba.com>
Link: https://bugzilla.openanolis.cn/show_bug.cgi?id=4947
Signed-off-by: Yang Li <yang.lee@linux.alibaba.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
18 months agonet: ethernet: microchip: vcap: Remove extra semicolon
Anup Sharma [Sat, 13 May 2023 11:06:54 +0000 (16:36 +0530)]
net: ethernet: microchip: vcap: Remove extra semicolon

Remove the extra semicolon at end. Issue identified using
semicolon.cocci Coccinelle semantic patch.

drivers/net/ethernet/microchip/vcap/vcap_api.c:1124:3-4: Unneeded semicolon
drivers/net/ethernet/microchip/vcap/vcap_api.c:1165:3-4: Unneeded semicolon
drivers/net/ethernet/microchip/vcap/vcap_api.c:1239:3-4: Unneeded semicolon
drivers/net/ethernet/microchip/vcap/vcap_api.c:1287:3-4: Unneeded semicolon

Signed-off-by: Anup Sharma <anupnewsmail@gmail.com>
Changes:
V1 -> V2: Target tree included in the subject line.

Signed-off-by: David S. Miller <davem@davemloft.net>
18 months agoMerge branch 'octeontx2-pf-HTB'
David S. Miller [Mon, 15 May 2023 08:31:08 +0000 (09:31 +0100)]
Merge branch 'octeontx2-pf-HTB'

Hariprasad Kelam says:

====================
octeontx2-pf: HTB offload support

octeontx2 silicon and CN10K transmit interface consists of five
transmit levels starting from MDQ, TL4 to TL1. Once packets are
submitted to MDQ, hardware picks all active MDQs using strict
priority, and MDQs having the same priority level are chosen using
round robin. Each packet will traverse MDQ, TL4 to TL1 levels.
Each level contains an array of queues to support scheduling and
shaping.

As HTB provides a classful queueing mechanism with rate and ceil
parameters, allowing the user to control the absolute bandwidth given
to particular classes of traffic, the same can be achieved by
configuring shapers and schedulers on the different transmit levels.

This series of patches adds support for HTB offload,

Patch1: Allow strict priority parameter in HTB offload mode.

Patch2: Rename existing total tx queues for better readability

Patch3: defines APIs such that the driver can dynamically initialize/
        deinitialize the send queues.

Patch4: Refactors transmit alloc/free calls as preparation for QOS
        offload code.

Patch5: moves rate limiting logic to common header which will be used
        by qos offload code.

Patch6: Adds actual HTB offload support.

Patch7: exposes qos send queue stats over ethtool.

Patch8: Add documentation about htb offload flow in driver
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
18 months agodocs: octeontx2: Add Documentation for QOS
Hariprasad Kelam [Sat, 13 May 2023 08:51:43 +0000 (14:21 +0530)]
docs: octeontx2: Add Documentation for QOS

Add QOS example configuration along with tc-htb commands
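
For reference, a minimal tc-htb offload flow might look like the
following (device name, rates and class ids are placeholders, not the
exact commands added to the documentation):

  tc qdisc replace dev eth0 root handle 1: htb offload
  tc class add dev eth0 parent 1: classid 1:1 htb rate 100mbit ceil 100mbit prio 1
  tc class add dev eth0 parent 1: classid 1:2 htb rate 50mbit ceil 200mbit prio 7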

Signed-off-by: Hariprasad Kelam <hkelam@marvell.com>
Reviewed-by: Simon Horman <simon.horman@corigine.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
18 months agoocteontx2-pf: ethtool expose qos stats
Hariprasad Kelam [Sat, 13 May 2023 08:51:42 +0000 (14:21 +0530)]
octeontx2-pf: ethtool expose qos stats

This patch extends ethtool stats support to QoS send queues as well.
Upon a request to change the number of transmit channels, it ensures
that the real number of transmit queues equals the number of active QoS
send queues plus the configured transmit queues.

    ethtool -S eth0
    txq_qos0: bytes: 3021391800
    txq_qos0: frames: 1998275
    txq_qos1: bytes: 4619766312
    txq_qos1: frames: 3055401
    ...
    ...

Signed-off-by: Hariprasad Kelam <hkelam@marvell.com>
Signed-off-by: Sunil Kovvuri Goutham <sgoutham@marvell.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
18 months agoocteontx2-pf: Add support for HTB offload
Naveen Mamindlapalli [Sat, 13 May 2023 08:51:41 +0000 (14:21 +0530)]
octeontx2-pf: Add support for HTB offload

This patch registers callbacks to support HTB offload.

Below are features supported,

- supports traffic shaping on the given class by honoring rate and ceil
configuration.

- supports traffic scheduling,  which prioritizes different types of
traffic based on strict priority values.

- supports the creation of leaf to inner classes such that parent node
rate limits apply to all child nodes.

Signed-off-by: Naveen Mamindlapalli <naveenm@marvell.com>
Signed-off-by: Hariprasad Kelam <hkelam@marvell.com>
Signed-off-by: Sunil Kovvuri Goutham <sgoutham@marvell.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
18 months agoocteontx2-pf: Prepare for QOS offload
Hariprasad Kelam [Sat, 13 May 2023 08:51:40 +0000 (14:21 +0530)]
octeontx2-pf: Prepare for QOS offload

This patch moves rate limiting definitions to a common header file and
adds csr definitions required for QOS code.

Signed-off-by: Hariprasad Kelam <hkelam@marvell.com>
Signed-off-by: Sunil Kovvuri Goutham <sgoutham@marvell.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
18 months agoocteontx2-pf: Refactor schedular queue alloc/free calls
Hariprasad Kelam [Sat, 13 May 2023 08:51:39 +0000 (14:21 +0530)]
octeontx2-pf: Refactor schedular queue alloc/free calls

1. Upon a txschq free request, the transmit scheduler config in hardware
does not get reset. This patch adds the necessary changes to do so.

2. The current implementation calls txschq alloc during interface
initialization and updates the default txschq array in the response
handler. This creates a problem for htb offload, where txschq alloc will
be called for every tc class. This patch addresses the issue by reading
the txschq response in the mbox caller function instead of in the
response handler.

3. The current otx2_txschq_stop routine tries to free all txschq nodes
allocated to the interface. This creates a problem for htb offload.
This patch introduces otx2_txschq_free_one() to free txschq nodes at a
given level.

Signed-off-by: Hariprasad Kelam <hkelam@marvell.com>
Signed-off-by: Naveen Mamindlapalli <naveenm@marvell.com>
Signed-off-by: Sunil Kovvuri Goutham <sgoutham@marvell.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
18 months agoocteontx2-pf: qos send queues management
Subbaraya Sundeep [Sat, 13 May 2023 08:51:38 +0000 (14:21 +0530)]
octeontx2-pf: qos send queues management

The current implementation is such that the number of Send queues (SQs)
is decided at device probe and is equal to the number of online
cpus. These SQs are allocated and deallocated in the interface open and
close calls respectively.

This patch defines new APIs for initializing and deinitializing Send
queues dynamically and allocates additional transmit queues for the
QOS feature.

Signed-off-by: Subbaraya Sundeep <sbhatta@marvell.com>
Signed-off-by: Hariprasad Kelam <hkelam@marvell.com>
Signed-off-by: Sunil Kovvuri Goutham <sgoutham@marvell.com>
Reviewed-by: Simon Horman <simon.horman@corigine.com>
Reviewed-by: Jacob Keller <jacob.e.keller@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
18 months agoocteontx2-pf: Rename tot_tx_queues to non_qos_queues
Hariprasad Kelam [Sat, 13 May 2023 08:51:37 +0000 (14:21 +0530)]
octeontx2-pf: Rename tot_tx_queues to non_qos_queues

The current implementation is such that tot_tx_queues contains both
xdp queues and normal tx queues, which are allocated in interface
open calls and deallocated in interface down calls respectively.

With the addition of QOS, where send queues are allocated/deallocated
upon user request, QoS send queues won't be part of tot_tx_queues. So
this patch renames tot_tx_queues to non_qos_queues.

Signed-off-by: Hariprasad Kelam <hkelam@marvell.com>
Reviewed-by: Simon Horman <simon.horman@corigine.com>
Reviewed-by: Jacob Keller <jacob.e.keller@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
18 months agosch_htb: Allow HTB priority parameter in offload mode
Naveen Mamindlapalli [Sat, 13 May 2023 08:51:36 +0000 (14:21 +0530)]
sch_htb: Allow HTB priority parameter in offload mode

The current implementation of HTB offload returns the EINVAL error
for unsupported parameters like prio and quantum. This patch removes
the error-returning checks for the 'prio' parameter and populates its
value into the tc_htb_qopt_offload structure so that the driver can
use it.

Add prio parameter check in mlx5 driver, as mlx5 devices are not capable
of supporting the prio parameter when htb offload is used. Report error
if prio parameter is set to a non-default value.

Signed-off-by: Naveen Mamindlapalli <naveenm@marvell.com>
Co-developed-by: Rahul Rameshbabu <rrameshbabu@nvidia.com>
Signed-off-by: Rahul Rameshbabu <rrameshbabu@nvidia.com>
Signed-off-by: Hariprasad Kelam <hkelam@marvell.com>
Signed-off-by: Sunil Kovvuri Goutham <sgoutham@marvell.com>
Reviewed-by: Simon Horman <simon.horman@corigine.com>
Reviewed-by: Jacob Keller <jacob.e.keller@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
18 months agonet: Remove low_thresh in ip defrag
Angus Chen [Fri, 12 May 2023 01:01:52 +0000 (09:01 +0800)]
net: Remove low_thresh in ip defrag

As low_thresh does no work in fragment reassembly, delete it,
and mark it deprecated in the sysctl documentation.

Signed-off-by: Angus Chen <angus.chen@jaguarmicro.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
18 months agoMerge tag 'wireless-next-2023-05-12' of git://git.kernel.org/pub/scm/linux/kernel...
David S. Miller [Mon, 15 May 2023 07:37:17 +0000 (08:37 +0100)]
Merge tag 'wireless-next-2023-05-12' of git://git.kernel.org/pub/scm/linux/kernel/git/wireless/wireless-next

Kalle Valo says:

====================
wireless-next patches for v6.5

The first pull request for v6.5 and only driver changes this time.
rtl8xxxu has been making lots of progress lately and now has AP mode
support.

Major changes:

rtl8xxxu

* AP mode support, initially only for rtl8188f

rtw89

* provide RSSI, EVM and SNR statistics via debugfs

* support U-NII-4 channels on 5 GHz band
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
18 months agoMerge branch 'bpf: Don't EFAULT for {g,s}setsockopt with wrong optlen'
Martin KaFai Lau [Sat, 13 May 2023 23:20:16 +0000 (16:20 -0700)]
Merge branch 'bpf: Don't EFAULT for {g,s}setsockopt with wrong optlen'

Stanislav Fomichev says:

====================
optval larger than PAGE_SIZE leads to EFAULT if the BPF program
isn't careful enough. This is often overlooked and might break
completely unrelated socket options. Instead of EFAULT,
let's ignore BPF program buffer changes. See the first patch for
more info.

In addition, clearly document this corner case and reset optlen
in our selftests (in case somebody copy-pastes from them).

v6:
- no changes; resending due to screwing up v5 series with the unrelated
  patch

v5:
- goto in the selftest (Martin)
- set IP_TOS to zero to avoid endianness complications (Martin)

v4:
- ignore retval as well when optlen > PAGE_SIZE (Martin)

v3:
- don't hard-code PAGE_SIZE (Martin)
- reset orig_optlen in getsockopt when kernel part succeeds (Martin)
====================

Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
18 months agobpf: Document EFAULT changes for sockopt
Stanislav Fomichev [Thu, 11 May 2023 17:04:56 +0000 (10:04 -0700)]
bpf: Document EFAULT changes for sockopt

And add examples for how to correctly handle large optlens.
This is less relevant now that we don't EFAULT anymore, but
it's still the correct thing to do.
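
As a rough illustration of the pattern those examples describe (a
sketch, not the exact documentation snippet), a getsockopt program can
simply refuse to touch oversized buffers:

  #include <linux/bpf.h>
  #include <bpf/bpf_helpers.h>

  SEC("cgroup/getsockopt")
  int getsockopt_passthrough(struct bpf_sockopt *ctx)
  {
        /* optlen bigger than the single page exposed to BPF: reset
         * optlen to 0 so the kernel keeps the original data untouched
         */
        if (ctx->optlen > 4096)
                ctx->optlen = 0;
        return 1; /* allow the syscall to proceed */
  }

  char _license[] SEC("license") = "GPL";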

Signed-off-by: Stanislav Fomichev <sdf@google.com>
Link: https://lore.kernel.org/r/20230511170456.1759459-5-sdf@google.com
Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
18 months agoselftests/bpf: Correctly handle optlen > 4096
Stanislav Fomichev [Thu, 11 May 2023 17:04:55 +0000 (10:04 -0700)]
selftests/bpf: Correctly handle optlen > 4096

Even though it's not relevant in selftests, people
might still copy-paste from them. So let's handle
optlen > 4096 cases explicitly.

Signed-off-by: Stanislav Fomichev <sdf@google.com>
Link: https://lore.kernel.org/r/20230511170456.1759459-4-sdf@google.com
Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
18 months agoselftests/bpf: Update EFAULT {g,s}etsockopt selftests
Stanislav Fomichev [Thu, 11 May 2023 17:04:54 +0000 (10:04 -0700)]
selftests/bpf: Update EFAULT {g,s}etsockopt selftests

Instead of assuming EFAULT, let's assume the BPF program's
output is ignored.

Remove "getsockopt: deny arbitrary ctx->retval" because it
was actually testing optlen. We have separate set of tests
for retval.

Signed-off-by: Stanislav Fomichev <sdf@google.com>
Link: https://lore.kernel.org/r/20230511170456.1759459-3-sdf@google.com
Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
18 months agobpf: Don't EFAULT for {g,s}setsockopt with wrong optlen
Stanislav Fomichev [Thu, 11 May 2023 17:04:53 +0000 (10:04 -0700)]
bpf: Don't EFAULT for {g,s}setsockopt with wrong optlen

With the way the hooks are implemented right now, we have a special
condition: an optval larger than PAGE_SIZE will expose only the first 4k
into BPF; any modifications to the optval are ignored. If the BPF program
doesn't handle this condition by resetting optlen to 0,
userspace will get EFAULT.

The intention of the EFAULT was to make it apparent to the
developers that the program is doing something wrong.
However, this inadvertently might affect production workloads
with the BPF programs that are not too careful (i.e., returning EFAULT
for perfectly valid setsockopt/getsockopt calls).

Let's try to minimize the chance of BPF program screwing up userspace
by ignoring the output of those BPF programs (instead of returning
EFAULT to the userspace). pr_info_once those cases to
the dmesg to help with figuring out what's going wrong.

Fixes: 0d01da6afc54 ("bpf: implement getsockopt and setsockopt hooks")
Suggested-by: Martin KaFai Lau <martin.lau@kernel.org>
Signed-off-by: Stanislav Fomichev <sdf@google.com>
Link: https://lore.kernel.org/r/20230511170456.1759459-2-sdf@google.com
Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
18 months agosfc: fix use-after-free in efx_tc_flower_record_encap_match()
Edward Cree [Fri, 12 May 2023 15:35:58 +0000 (16:35 +0100)]
sfc: fix use-after-free in efx_tc_flower_record_encap_match()

When writing error messages to extack for pseudo collisions, we can't
 use encap->type as encap has already been freed.  Fortunately the
 same value is stored in local variable em_type, so use that instead.

Fixes: 3c9561c0a5b9 ("sfc: support TC decap rules matching on enc_ip_tos")
Reported-by: Simon Horman <simon.horman@corigine.com>
Signed-off-by: Edward Cree <ecree.xilinx@gmail.com>
Reviewed-by: Simon Horman <simon.horman@corigine.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
18 months agonet: phylink: constify fwnode arguments
Russell King (Oracle) [Fri, 12 May 2023 16:58:37 +0000 (17:58 +0100)]
net: phylink: constify fwnode arguments

Both phylink_create() and phylink_fwnode_phy_connect() do not modify
the fwnode argument that they are passed, so let's constify these.

Reviewed-by: Simon Horman <simon.horman@corigine.com>
Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
Signed-off-by: David S. Miller <davem@davemloft.net>
18 months agonet: fec: using the standard return codes when xdp xmit errors
Shenwei Wang [Fri, 12 May 2023 13:20:10 +0000 (08:20 -0500)]
net: fec: using the standard return codes when xdp xmit errors

This patch standardizes the inconsistent return values for unsuccessful
XDP transmits by using standardized error codes (-EBUSY or -ENOMEM).

Signed-off-by: Shenwei Wang <shenwei.wang@nxp.com>
Reviewed-by: Simon Horman <simon.horman@corigine.com>
Reviewed-by: Horatiu Vultur <horatiu.vultur@microchip.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
18 months agonet: macb: Shorten max_tx_len to 4KiB - 56 on mpfs
Daire McNamara [Fri, 12 May 2023 12:20:32 +0000 (13:20 +0100)]
net: macb: Shorten max_tx_len to 4KiB - 56 on mpfs

On mpfs, with SRAM configured for 4 queues, setting max_tx_len
to GEM_TX_MAX_LEN=0x3f0 results in multiple AMBA errors.
Setting max_tx_len to (4KiB - 56) removes those errors.

The details are described in erratum 1686 by Cadence.

The max jumbo frame size is also reduced for mpfs to (4KiB - 56).

Signed-off-by: Daire McNamara <daire.mcnamara@microchip.com>
Reviewed-by: Conor Dooley <conor.dooley@microchip.com>
Reviewed-by: Simon Horman <simon.horman@corigine.com>
Reviewed-by: Claudiu Beznea <claudiu.beznea@microchip.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
18 months agoping: Convert hlist_nulls to plain hlist.
Kuniyuki Iwashima [Wed, 10 May 2023 21:54:43 +0000 (14:54 -0700)]
ping: Convert hlist_nulls to plain hlist.

Since it was introduced in commit c319b4d76b9e ("net: ipv4: add IPPROTO_ICMP
socket kind"), the ping socket has not used SLAB_TYPESAFE_BY_RCU nor checked
the nulls marker in loops.

Signed-off-by: Kuniyuki Iwashima <kuniyu@amazon.com>
Reviewed-by: Simon Horman <simon.horman@corigine.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
18 months agoMerge branch 'skb_frag_fill_page_desc'
David S. Miller [Sat, 13 May 2023 18:47:56 +0000 (19:47 +0100)]
Merge branch 'skb_frag_fill_page_desc'

Yunsheng Lin says:

====================
net: introduce skb_frag_fill_page_desc()

Most users use __skb_frag_set_page()/skb_frag_off_set()/
skb_frag_size_set() to fill the page desc for a skb frag.
It does not make much sense to call __skb_frag_set_page()
without calling skb_frag_off_set(), as the offset may depend
on whether the page is a head page or a tail page, so add
skb_frag_fill_page_desc() to fill the page desc for a skb
frag.

In the future, we can make sure the page in the frag is
the head page of a compound page or a base page; if not, we
may warn about that and convert the tail page to a head
page, updating the offset accordingly. If we see a warning
about that, we also fix the caller to fill the head page
in the frag. When the fixing is done, we may remove the
warning and the conversion.

In this way, we can remove the compound_head() or use
page_ref_*() like the below case:
https://elixir.bootlin.com/linux/latest/source/net/core/page_pool.c#L881
https://elixir.bootlin.com/linux/latest/source/include/linux/skbuff.h#L3383

It may also convert net stack to use the folio easier.

V1: repost with all the ack/review tags included.
RFC: remove a local variable as pointed out by Simon.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
18 months agonet: remove __skb_frag_set_page()
Yunsheng Lin [Thu, 11 May 2023 01:12:13 +0000 (09:12 +0800)]
net: remove __skb_frag_set_page()

The remaining users calling __skb_frag_set_page() with
page being NULL seem to be doing defensive programming,
as shinfo->nr_frags is already decremented, so remove
them.

Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
Reviewed-by: Leon Romanovsky <leonro@nvidia.com>
Reviewed-by: Michael Chan <michael.chan@broadcom.com>
Reviewed-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Reviewed-by: Simon Horman <simon.horman@corigine.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
18 months agonet: introduce and use skb_frag_fill_page_desc()
Yunsheng Lin [Thu, 11 May 2023 01:12:12 +0000 (09:12 +0800)]
net: introduce and use skb_frag_fill_page_desc()

Most users use __skb_frag_set_page()/skb_frag_off_set()/
skb_frag_size_set() to fill the page desc for a skb frag.

Introduce skb_frag_fill_page_desc() to do that.
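
A sketch of the intended conversion at a typical call site (variable
names are illustrative):

  /* before */
  __skb_frag_set_page(frag, page);
  skb_frag_off_set(frag, offset);
  skb_frag_size_set(frag, size);

  /* after */
  skb_frag_fill_page_desc(frag, page, offset, size);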

net/bpf/test_run.c does not call skb_frag_off_set() to
set the offset; "copy_from_user(page_address(page), ...)"
and 'shinfo' being part of the 'data' kzalloced in
bpf_test_init() suggest that it assumes the offset is
initialized to zero, so call skb_frag_fill_page_desc()
with an offset of zero for this case.

Also, skb_frag_set_page() is not used anymore, so remove
it.

Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
Reviewed-by: Leon Romanovsky <leonro@nvidia.com>
Reviewed-by: Simon Horman <simon.horman@corigine.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
18 months agoselftests: net: vxlan: Add tests for vxlan nolocalbypass option.
Vladimir Nikishkin [Fri, 12 May 2023 03:40:34 +0000 (11:40 +0800)]
selftests: net: vxlan: Add tests for vxlan nolocalbypass option.

Add test to make sure that the localbypass option is on by default.

Add test to change vxlan localbypass to nolocalbypass and check
that packets are delivered to userspace.

Signed-off-by: Vladimir Nikishkin <vladimir@nikishkin.pw>
Reviewed-by: Ido Schimmel <idosch@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
18 months agonet: vxlan: Add nolocalbypass option to vxlan.
Vladimir Nikishkin [Fri, 12 May 2023 03:40:33 +0000 (11:40 +0800)]
net: vxlan: Add nolocalbypass option to vxlan.

If a packet needs to be encapsulated towards a local destination IP, the
packet will undergo a "local bypass" and be injected into the Rx path as
if it was received by the target VXLAN device without undergoing
encapsulation. If such a device does not exist, the packet will be
dropped.

There are scenarios where we do not want to perform such a bypass, but
instead want the packet to be encapsulated and locally received by a
user space program for post-processing.

To that end, add a new VXLAN device attribute that controls whether a
"local bypass" is performed or not. Default to performing a bypass to
maintain existing behavior.
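
A hedged usage sketch, assuming matching iproute2 support for the new
attribute (device names and addresses are placeholders):

  # create a VXLAN device with local bypass disabled
  ip link add vxlan0 type vxlan id 42 local 192.0.2.1 dstport 4789 nolocalbypass
  # switch back to the default behaviour
  ip link set dev vxlan0 type vxlan localbypass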

Signed-off-by: Vladimir Nikishkin <vladimir@nikishkin.pw>
Reviewed-by: Ido Schimmel <idosch@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
18 months agoMerge branch 'broadcom-phy-wol'
David S. Miller [Sat, 13 May 2023 15:56:29 +0000 (16:56 +0100)]
Merge branch 'broadcom-phy-wol'

Florian Fainelli says:

====================
Support for Wake-on-LAN for Broadcom PHYs

This patch series adds support for Wake-on-LAN to the Broadcom PHY
driver. Specifically the BCM54210E/B50212E are capable of supporting
Wake-on-LAN using an external pin typically wired up to a system's GPIO.

These PHYs implement a programmable Ethernet MAC destination address
comparator which will raise an interrupt whenever a match is received.
Because of that, it was necessary to introduce patch #1 which allows the
PHY driver's ->suspend() routine to be called unconditionally. This is
necessary in our case because we need a hook point into the device
suspend/resume flow to enable the wake-up interrupt as late as possible.

Patch #2 adds support for the Broadcom PHY library and driver for
Wake-on-LAN proper with the WAKE_UCAST, WAKE_MCAST, WAKE_BCAST,
WAKE_MAGIC and WAKE_MAGICSECURE. Note that WAKE_FILTER is supportable,
however this will require further discussion and will be submitted as an
RFC series later on.

Patch #3 updates the GENET driver to defer to the PHY for Wake-on-LAN if
the PHY supports it, thus allowing the MAC to be powered down to
conserve power.

Changes in v3:

- collected Reviewed-by tags
- explicitly use return 0 in bcm54xx_phy_probe() (Paolo)

Changes in v2:

- introduce PHY_ALWAYS_CALL_SUSPEND and only have the Broadcom PHY
  driver set this flag to minimize changes to the suspend flow to only
  drivers that need it

- corrected possibly uninitialized variable in bcm54xx_set_wakeup_irq
  (Simon)
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
18 months agonet: bcmgenet: Add support for PHY-based Wake-on-LAN
Florian Fainelli [Thu, 11 May 2023 17:21:10 +0000 (10:21 -0700)]
net: bcmgenet: Add support for PHY-based Wake-on-LAN

If available, interrogate the PHY to find out whether we can use it for
Wake-on-LAN. This can be a more power efficient way of implementing
that feature, especially when the MAC is powered off in low power
states.

Reviewed-by: Simon Horman <simon.horman@corigine.com>
Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
18 months agonet: phy: broadcom: Add support for Wake-on-LAN
Florian Fainelli [Thu, 11 May 2023 17:21:09 +0000 (10:21 -0700)]
net: phy: broadcom: Add support for Wake-on-LAN

Add support for WAKE_UCAST, WAKE_MCAST, WAKE_BCAST, WAKE_MAGIC and
WAKE_MAGICSECURE. This is only supported with the BCM54210E and
compatible Ethernet PHYs. Using either the in-band interrupt or an
out-of-band GPIO interrupt is supported.

Broadcom PHYs will generate a Wake-on-LAN level low interrupt on LED4 as
soon as one of the supported patterns is being matched. That includes
generating such an interrupt even if the PHY is operated during normal
modes. If WAKE_UCAST is selected, this could lead to the LED4 interrupt
firing up for every packet being received which is absolutely
undesirable from a performance point of view.

Because the Wake-on-LAN configuration can be set long before the system
is actually put to sleep, we cannot have an interrupt service routine to
clear on read the interrupt status register and ensure that new packet
matches will be detected.

It is desirable to enable the Wake-on-LAN interrupt as late as possible
during the system suspend process such that we limit the number of
interrupts to be handled by the system, but also conversely feed into
the Linux's system suspend way of dealing with interrupts in and around
the points of no return.

Reviewed-by: Simon Horman <simon.horman@corigine.com>
Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
18 months agonet: phy: Allow drivers to always call into ->suspend()
Florian Fainelli [Thu, 11 May 2023 17:21:08 +0000 (10:21 -0700)]
net: phy: Allow drivers to always call into ->suspend()

A few PHY drivers are currently attempting to not suspend the PHY when
Wake-on-LAN is enabled, however that code is not currently executing at
all due to an early check in phy_suspend().

This prevents PHY drivers from making an appropriate decisions and put
the hardware into a low power state if desired.

In order to allow the PHY drivers to opt into getting their ->suspend
routine to be called, add a PHY_ALWAYS_CALL_SUSPEND bit which can be
set. A boolean that tracks whether the PHY or the attached MAC has
Wake-on-LAN enabled is also provided for convenience.

If phydev::wol_enabled is set, then the PHY shall not prevent its own
Wake-on-LAN detection logic from working and shall not prevent the
Ethernet MAC from receiving packets for matching.
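
A hedged sketch of how a PHY driver might opt in (the "example" names
are placeholders; only PHY_ALWAYS_CALL_SUSPEND and phydev->wol_enabled
come from this change):

  static int example_suspend(struct phy_device *phydev)
  {
        /* with PHY_ALWAYS_CALL_SUSPEND this runs even when WoL is
         * enabled, so the driver decides what can be powered down
         */
        if (phydev->wol_enabled)
                return 0;
        return genphy_suspend(phydev);
  }

  static struct phy_driver example_driver = {
        .name    = "Example PHY",
        .flags   = PHY_ALWAYS_CALL_SUSPEND,
        .suspend = example_suspend,
        .resume  = genphy_resume,
  };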

Reviewed-by: Simon Horman <simon.horman@corigine.com>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>