platform/kernel/linux-starfive.git
KP Singh [Mon, 18 Apr 2022 15:51:58 +0000 (15:51 +0000)]
bpf: Fix usage of trace RCU in local storage.

bpf_{sk,task,inode}_storage_free() do not need to use
call_rcu_tasks_trace as no BPF program should be accessing the owner
as it's being destroyed. The only other reader at this point is
bpf_local_storage_map_free() which uses normal RCU.

The only paths that need trace RCU are:

* bpf_local_storage_{delete,update} helpers
* map_{delete,update}_elem() syscalls

Fixes: 0fe4b381a59e ("bpf: Allow bpf_local_storage to be used by sleepable programs")
Signed-off-by: KP Singh <kpsingh@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Martin KaFai Lau <kafai@fb.com>
Link: https://lore.kernel.org/bpf/20220418155158.2865678-1-kpsingh@kernel.org
Alexei Starovoitov [Tue, 19 Apr 2022 21:02:49 +0000 (14:02 -0700)]
Merge branch 'Ensure type tags are always ordered first in BTF'

Kumar Kartikeya Dwivedi says:

====================

When iterating over modifiers, ensure that type tags can only occur at the head
of the chain and never later, so that checking for them once at the start tells
us there are no more type tags in later modifiers. Clang already emits BTF this
way, but a user can craft their own BTF that violates this assumption, which
would be a problem if the kernel relied on it.

Changelog:
----------
v2 -> v3
v2: https://lore.kernel.org/bpf/20220418224719.1604889-1-memxor@gmail.com

 * Address nit from Yonghong, add Acked-by

v1 -> v2
v1: https://lore.kernel.org/bpf/20220406004121.282699-1-memxor@gmail.com

 * Fix for bug pointed out by Yonghong
 * Update selftests to include Yonghong's example
====================

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Kumar Kartikeya Dwivedi [Tue, 19 Apr 2022 16:46:08 +0000 (22:16 +0530)]
selftests/bpf: Add tests for type tag order validation

Add a few test cases that ensure we catch cases of badly ordered type
tags in modifier chains.

Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Yonghong Song <yhs@fb.com>
Link: https://lore.kernel.org/bpf/20220419164608.1990559-3-memxor@gmail.com
Kumar Kartikeya Dwivedi [Tue, 19 Apr 2022 16:46:07 +0000 (22:16 +0530)]
bpf: Ensure type tags precede modifiers in BTF

It is guaranteed that for modifiers, clang always places type tags
before other modifiers, and then the base type. We would like to rely on
this guarantee inside the kernel to make it simple to parse type tags
from BTF.

However, a user is allowed to construct BTF without such guarantees. Hence,
add a pass to check that in modifier chains, type tags only occur at the head
of the chain and never later in the chain.

If we see a type tag, one or more type tags may precede other modifiers, but
no type tag may appear after those modifiers. If we see another modifier
first, no modifier following it may be a type tag.

Instead of re-walking chains that were already verified, we can remember the
last good modifier type ID that headed a good chain; at that point, all other
chains headed by type IDs less than it have already been verified. This keeps
verification cheap and makes it a simple O(n) pass.
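
For illustration only, the pass amounts to something like the following sketch
(is_modifier(), is_type_tag() and next_type_id() are hypothetical helpers, not
the actual kernel code):

  static int check_type_tag_order(u32 nr_types)
  {
          u32 i, id, last_good = 0;

          for (i = 1; i <= nr_types; i++) {
                  bool in_tags = true;    /* still in the leading run of type tags */

                  if (!is_modifier(i))
                          continue;

                  id = i;
                  while (is_modifier(id)) {
                          if (is_type_tag(id)) {
                                  if (!in_tags)
                                          return -EINVAL; /* tag after other modifiers */
                          } else {
                                  in_tags = false;
                          }
                          if (id <= last_good)
                                  break;          /* rest of chain already verified */
                          id = next_type_id(id);  /* follow the modifier's ->type */
                  }
                  last_good = i;
          }
          return 0;
  }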

Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Yonghong Song <yhs@fb.com>
Link: https://lore.kernel.org/bpf/20220419164608.1990559-2-memxor@gmail.com
Andrii Nakryiko [Tue, 19 Apr 2022 00:24:51 +0000 (17:24 -0700)]
selftests/bpf: Use non-autoloaded programs in few tests

Take advantage of the new libbpf feature for declaring non-autoloaded BPF
programs via SEC() in a few tests that test a single program at a time out
of the many programs available within a single BPF object.

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20220419002452.632125-2-andrii@kernel.org
Andrii Nakryiko [Tue, 19 Apr 2022 00:24:50 +0000 (17:24 -0700)]
libbpf: Support opting out from autoloading BPF programs declaratively

Establish a SEC("?abc") naming convention (i.e., adding a question mark in
front of an otherwise normal section name) that sets the corresponding
program's autoload property to false. This is effectively just a declarative
way to do bpf_program__set_autoload(prog, false).

Having a way to do this declaratively in the BPF code itself is useful and
convenient for various scenarios. E.g., for testing, when a BPF object
consists of multiple independent BPF programs that each need to be tested
separately. Opting all of them out by default and then setting autoload to
true for just one of them at a time simplifies the testing code (see the
next patch for a few conversions in BPF selftests taking advantage of this
new feature).

Another real-world use case is in libbpf-tools, where different BPF programs
have to be picked depending on particulars of the host kernel due to various
incompatible changes (like kernel function renames or signature changes, or
picking kprobe vs fentry depending on kernel support for the latter).
Declaratively marking all the different BPF program candidates as
non-autoloaded makes this more obvious in the BPF source code and keeps the
user-space code simpler.

A BPF program marked as SEC("?abc") is otherwise treated just like
SEC("abc"), and bpf_program__section_name() will still report "abc".

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20220419002452.632125-1-andrii@kernel.org
Yonghong Song [Tue, 19 Apr 2022 05:09:00 +0000 (22:09 -0700)]
selftests/bpf: Workaround a verifier issue for test exhandler

The llvm patch [1] enabled opaque pointers, which caused a failure in the
'exhandler' selftest.
  ...
  ; work = task->task_works;
  7: (79) r1 = *(u64 *)(r6 +2120)       ; R1_w=ptr_callback_head(off=0,imm=0) R6_w=ptr_task_struct(off=0,imm=0)
  ; func = work->func;
  8: (79) r2 = *(u64 *)(r1 +8)          ; R1_w=ptr_callback_head(off=0,imm=0) R2_w=scalar()
  ; if (!work && !func)
  9: (4f) r1 |= r2
  math between ptr_ pointer and register with unbounded min value is not allowed

  below is insn 10 and 11
  10: (55) if r1 != 0 goto +5
  11: (18) r1 = 0 ll
  ...

In llvm, the 'r1 |= r2' code is generated in the SelectionDAG codegen phase
due to the difference between opaque and non-opaque pointers.
Without [1], the related code looks like:
  r2 = *(u64 *)(r6 + 2120)
  r1 = *(u64 *)(r2 + 8)
  if r2 != 0 goto +6 <LBB0_4>
  if r1 != 0 goto +5 <LBB0_4>
  r1 = 0 ll
  ...

I haven't found a good way to fix this issue in llvm, so let us work around
the problem first so that bpf CI won't be blocked.

  [1] https://reviews.llvm.org/D123300

Signed-off-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20220419050900.3136024-1-yhs@fb.com
Yonghong Song [Tue, 19 Apr 2022 04:32:30 +0000 (21:32 -0700)]
selftests/bpf: Limit unroll_count for pyperf600 test

LLVM commit [1] changed loop pragma behavior such that a full loop unroll
is always honored when a user pragma is present. Previously, the unroll
count also depended on the unrolled code size. For pyperf600, without [1],
the loop unroll count is 150. With [1], the loop unroll count is 600.

The unroll count of 600 pushed the program size close to 298k, which
caused the following code to be generated:
         0:       7b 1a 00 ff 00 00 00 00 *(u64 *)(r10 - 256) = r1
  ;       uint64_t pid_tgid = bpf_get_current_pid_tgid();
         1:       85 00 00 00 0e 00 00 00 call 14
         2:       bf 06 00 00 00 00 00 00 r6 = r0
  ;       pid_t pid = (pid_t)(pid_tgid >> 32);
         3:       bf 61 00 00 00 00 00 00 r1 = r6
         4:       77 01 00 00 20 00 00 00 r1 >>= 32
         5:       63 1a fc ff 00 00 00 00 *(u32 *)(r10 - 4) = r1
         6:       bf a2 00 00 00 00 00 00 r2 = r10
         7:       07 02 00 00 fc ff ff ff r2 += -4
  ;       PidData* pidData = bpf_map_lookup_elem(&pidmap, &pid);
         8:       18 01 00 00 00 00 00 00 00 00 00 00 00 00 00 00 r1 = 0 ll
        10:       85 00 00 00 01 00 00 00 call 1
        11:       bf 08 00 00 00 00 00 00 r8 = r0
  ;       if (!pidData)
        12:       15 08 15 e8 00 00 00 00 if r8 == 0 goto -6123 <LBB0_27588+0xffffffffffdae100>

Note that insn 12 has a branch offset of -6123, which is clearly illegal
and will be rejected by the verifier. The offset wrapped negative because
the branch range is greater than INT16_MAX.

This patch changes the unroll count to 150 to avoid the above out-of-range
branch target issue. llvm has also been enhanced ([2]) to assert if the
branch target insn is out of INT16 range.

  [1] https://reviews.llvm.org/D119148
  [2] https://reviews.llvm.org/D123877

Signed-off-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20220419043230.2928530-1-yhs@fb.com
Stanislav Fomichev [Thu, 14 Apr 2022 16:12:33 +0000 (09:12 -0700)]
bpf: Move rcu lock management out of BPF_PROG_RUN routines

Commit 7d08c2c91171 ("bpf: Refactor BPF_PROG_RUN_ARRAY family of macros
into functions") switched a bunch of BPF_PROG_RUN macros to inline
routines. This changed the semantics a bit. Due to macro argument
expansion, it used to be:

rcu_read_lock();
array = rcu_dereference(cgrp->bpf.effective[atype]);
...

Now, with inline routines, we have:
array_rcu = rcu_dereference(cgrp->bpf.effective[atype]);
/* array_rcu can be kfree'd here */
rcu_read_lock();
array = rcu_dereference(array_rcu);

I assume that in practice the RCU subsystem isn't fast enough to trigger
this, but let's use the RCU API properly.
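
The fix (keeping the RCU locking inside the helper by passing cgroup_bpf to
it) restores the original ordering, i.e. roughly (sketch):

  rcu_read_lock();
  array = rcu_dereference(cgrp->bpf.effective[atype]);
  /* run the prog array while still inside the RCU read-side section */
  rcu_read_unlock();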

Also, rename the helpers to lower case so they are not confused with
macros. Additionally, drop and expand BPF_PROG_CGROUP_INET_EGRESS_RUN_ARRAY.

See [1] for more context.

  [1] https://lore.kernel.org/bpf/CAKH8qBs60fOinFdxiiQikK_q0EcVxGvNTQoWvHLEUGbgcj1UYg@mail.gmail.com/T/#u

v2
- keep rcu locks inside by passing cgroup_bpf

Fixes: 7d08c2c91171 ("bpf: Refactor BPF_PROG_RUN_ARRAY family of macros into functions")
Signed-off-by: Stanislav Fomichev <sdf@google.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Martin KaFai Lau <kafai@fb.com>
Link: https://lore.kernel.org/bpf/20220414161233.170780-1-sdf@google.com
Mykola Lysenko [Mon, 18 Apr 2022 22:25:07 +0000 (15:25 -0700)]
selftests/bpf: Refactor prog_tests logging and test execution

This is a pre-req to add separate logging for each subtest in
test_progs.

Move all the mutable test data to the test_result struct.
Move per-test init/de-init into the run_one_test function.
Consolidate data aggregation and final log output in the
calculate_and_print_summary function.
As a side effect, this patch fixes double counting of errors
for subtests and possible duplicate output of the subtest log
on failures.

Also, add the prog_tests_framework.c test to verify some of the
counting logic.

As part of verification, I confirmed that the number of reported
tests is the same before and after the change for both parallel
and sequential test execution.

Signed-off-by: Mykola Lysenko <mykolal@fb.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20220418222507.1726259-1-mykolal@fb.com
Maciej Fijalkowski [Wed, 13 Apr 2022 15:30:15 +0000 (17:30 +0200)]
xsk: Drop ternary operator from xskq_cons_has_entries

Simplify the mentioned helper function by removing the ternary operator; the
expression already yields a boolean value by itself.

This helper might be used in the hot path, so this simplification can also be
considered a micro-optimization.
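
For illustration (not the exact diff), the change is of the form:

  /* before: */
  return entries >= cnt ? true : false;

  /* after: the comparison already evaluates to a boolean */
  return entries >= cnt;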

Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20220413153015.453864-15-maciej.fijalkowski@intel.com
Maciej Fijalkowski [Wed, 13 Apr 2022 15:30:14 +0000 (17:30 +0200)]
ice, xsk: Avoid refilling single Rx descriptors

Call the Rx alloc routine for the ZC driver only when the number of
unallocated descriptors exceeds a given threshold.

Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20220413153015.453864-14-maciej.fijalkowski@intel.com
Maciej Fijalkowski [Wed, 13 Apr 2022 15:30:13 +0000 (17:30 +0200)]
stmmac, xsk: Diversify return values from xsk_wakeup call paths

Currently, when debugging AF_XDP workloads, one can correlate the -ENXIO
return code with the case where the XSK is not in the bound state. Returning
the same code from ndo_xsk_wakeup can be misleading and simply makes it
harder to follow what is going on.

Change the ENXIOs in stmmac's ndo_xsk_wakeup() implementation to EINVALs, so
that when probing it is clear that something is wrong on the driver
side, not in xsk_{recv,send}msg.

There is a -ENETDOWN that can happen from both the kernel and driver sides,
but I don't have a correct replacement for it on one of the sides, so
let's keep it that way.

Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20220413153015.453864-13-maciej.fijalkowski@intel.com
Maciej Fijalkowski [Wed, 13 Apr 2022 15:30:12 +0000 (17:30 +0200)]
mlx5, xsk: Diversify return values from xsk_wakeup call paths

Currently, when debugging AF_XDP workloads, one can correlate the -ENXIO
return code with the case where the XSK is not in the bound state. Returning
the same code from ndo_xsk_wakeup can be misleading and simply makes it
harder to follow what is going on.

Change the ENXIO in mlx5's ndo_xsk_wakeup() implementation to EINVAL, so
that when probing it is clear that something is wrong on the driver
side, not in xsk_{recv,send}msg.

There is a -ENETDOWN that can happen from both the kernel and driver sides,
but I don't have a correct replacement for it on one of the sides, so
let's keep it that way.

Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20220413153015.453864-12-maciej.fijalkowski@intel.com
Maciej Fijalkowski [Wed, 13 Apr 2022 15:30:11 +0000 (17:30 +0200)]
ixgbe, xsk: Diversify return values from xsk_wakeup call paths

Currently, when debugging AF_XDP workloads, one can correlate the -ENXIO
return code with the case where the XSK is not in the bound state. Returning
the same code from ndo_xsk_wakeup can be misleading and simply makes it
harder to follow what is going on.

Change the ENXIOs in ixgbe's ndo_xsk_wakeup() implementation to EINVALs, so
that when probing it is clear that something is wrong on the driver
side, not in xsk_{recv,send}msg.

There is a -ENETDOWN that can happen from both the kernel and driver sides,
but I don't have a correct replacement for it on one of the sides, so
let's keep it that way.

Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20220413153015.453864-11-maciej.fijalkowski@intel.com
Maciej Fijalkowski [Wed, 13 Apr 2022 15:30:10 +0000 (17:30 +0200)]
i40e, xsk: Diversify return values from xsk_wakeup call paths

Currently, when debugging AF_XDP workloads, one can correlate the -ENXIO
return code with the case where the XSK is not in the bound state. Returning
the same code from ndo_xsk_wakeup can be misleading and simply makes it
harder to follow what is going on.

Change the ENXIOs in i40e's ndo_xsk_wakeup() implementation to EINVALs, so
that when probing it is clear that something is wrong on the driver
side, not in xsk_{recv,send}msg.

There is a -ENETDOWN that can happen from both the kernel and driver sides,
but I don't have a correct replacement for it on one of the sides, so
let's keep it that way.

Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20220413153015.453864-10-maciej.fijalkowski@intel.com
Maciej Fijalkowski [Wed, 13 Apr 2022 15:30:09 +0000 (17:30 +0200)]
ice, xsk: Diversify return values from xsk_wakeup call paths

Currently, when debugging AF_XDP workloads, one can correlate the -ENXIO
return code with the case where the XSK is not in the bound state. Returning
the same code from ndo_xsk_wakeup can be misleading and simply makes it
harder to follow what is going on.

Change the ENXIOs in ice's ndo_xsk_wakeup() implementation to EINVALs, so
that when probing it is clear that something is wrong on the driver
side, not in xsk_{recv,send}msg.

There is a -ENETDOWN that can happen from both the kernel and driver sides,
but I don't have a correct replacement for it on one of the sides, so
let's keep it that way.

Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20220413153015.453864-9-maciej.fijalkowski@intel.com
Maciej Fijalkowski [Wed, 13 Apr 2022 15:30:08 +0000 (17:30 +0200)]
ixgbe, xsk: Terminate Rx side of NAPI when XSK Rx queue gets full

When the XSK pool uses the need_wakeup feature, correlate the -ENOBUFS
returned from xdp_do_redirect() with the XSK Rx queue being full. In such
a case, terminate the Rx processing being done on the current HW Rx ring
and let user space consume descriptors from the XSK Rx queue so that there
is room the driver can use later on.

Introduce a new internal return code, IXGBE_XDP_EXIT, that indicates the
case described above.

Note that this does not affect Tx processing bound to the same NAPI
context, nor the other Rx rings.

Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20220413153015.453864-8-maciej.fijalkowski@intel.com
Maciej Fijalkowski [Wed, 13 Apr 2022 15:30:07 +0000 (17:30 +0200)]
i40e, xsk: Terminate Rx side of NAPI when XSK Rx queue gets full

When the XSK pool uses the need_wakeup feature, correlate the -ENOBUFS
returned from xdp_do_redirect() with the XSK Rx queue being full. In such
a case, terminate the Rx processing being done on the current HW Rx ring
and let user space consume descriptors from the XSK Rx queue so that there
is room the driver can use later on.

Introduce a new internal return code, I40E_XDP_EXIT, that indicates the
case described above.

Note that this does not affect Tx processing bound to the same NAPI
context, nor the other Rx rings.

Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20220413153015.453864-7-maciej.fijalkowski@intel.com
Maciej Fijalkowski [Wed, 13 Apr 2022 15:30:06 +0000 (17:30 +0200)]
ice, xsk: Terminate Rx side of NAPI when XSK Rx queue gets full

When the XSK pool uses the need_wakeup feature, correlate the -ENOBUFS
returned from xdp_do_redirect() with the XSK Rx queue being full. In such
a case, terminate the Rx processing being done on the current HW Rx ring
and let user space consume descriptors from the XSK Rx queue so that there
is room the driver can use later on.

Introduce a new internal return code, ICE_XDP_EXIT, that indicates the
case described above.

Note that this does not affect Tx processing bound to the same NAPI
context, nor the other Rx rings.

Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20220413153015.453864-6-maciej.fijalkowski@intel.com
Maciej Fijalkowski [Wed, 13 Apr 2022 15:30:05 +0000 (17:30 +0200)]
ixgbe, xsk: Decorate IXGBE_XDP_REDIR with likely()

ixgbe_run_xdp_zc() suggests to the compiler that XDP_REDIRECT is the most
probable action returned from the BPF program that AF_XDP has in its
pipeline. Let's also bring this suggestion up to the callsite of
ixgbe_run_xdp_zc() so that the compiler can generate more optimized code,
which in turn will make the branch predictor happy.
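
Concretely, the hint is just a likely() at the branch testing the returned
verdict, e.g. (sketch, not the exact hunk):

  xdp_res = ixgbe_run_xdp_zc(adapter, rx_ring, bi->xdp);
  if (likely(xdp_res == IXGBE_XDP_REDIR)) {
          /* common AF_XDP case: frame redirected to the XSK pool */
          total_rx_packets++;
  } else {
          handle_other_verdict(xdp_res);  /* hypothetical slow path */
  }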

Suggested-by: Jesper Dangaard Brouer <brouer@redhat.com>
Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20220413153015.453864-5-maciej.fijalkowski@intel.com
Maciej Fijalkowski [Wed, 13 Apr 2022 15:30:04 +0000 (17:30 +0200)]
ice, xsk: Decorate ICE_XDP_REDIR with likely()

ice_run_xdp_zc() suggests to the compiler that XDP_REDIRECT is the most
probable action returned from the BPF program that AF_XDP has in its
pipeline. Let's also bring this suggestion up to the callsite of
ice_run_xdp_zc() so that the compiler can generate more optimized code,
which in turn will make the branch predictor happy.

Suggested-by: Jesper Dangaard Brouer <brouer@redhat.com>
Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20220413153015.453864-4-maciej.fijalkowski@intel.com
Maciej Fijalkowski [Wed, 13 Apr 2022 15:30:03 +0000 (17:30 +0200)]
xsk: Diversify return codes in xsk_rcv_check()

Inspired by the patch that made xdp_do_redirect() return values for XSKMAP
more meaningful, return -ENXIO instead of -EINVAL for a socket being
unbound in xsk_rcv_check(), as this is the usual value returned for such
an event. It is now possible to easily distinguish what went wrong, which
was harder when -EINVAL was returned for both of the cases checked.

Return codes can be counted in a nice way via bpftrace oneliner that
Jesper has shown:

bpftrace -e 'tracepoint:xdp:xdp_redirect* {@err[-args->err] = count();}'

Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Jesper Dangaard Brouer <brouer@redhat.com>
Link: https://lore.kernel.org/bpf/20220413153015.453864-3-maciej.fijalkowski@intel.com
Björn Töpel [Wed, 13 Apr 2022 15:30:02 +0000 (17:30 +0200)]
xsk: Improve xdp_do_redirect() error codes

The error codes returned by xdp_do_redirect() when redirecting a frame
to an AF_XDP socket have not been very useful. A driver could not
distinguish between different errors. Prior to this change the following
codes were used:

Socket not bound or incorrect queue/netdev: EINVAL
XDP frame/AF_XDP buffer size mismatch: ENOSPC
Could not allocate buffer (copy mode): ENOSPC
AF_XDP Rx buffer full: ENOSPC

After this change:

Socket not bound or incorrect queue/netdev: EINVAL
XDP frame/AF_XDP buffer size mismatch: ENOSPC
Could not allocate buffer (copy mode): ENOMEM
AF_XDP Rx buffer full: ENOBUFS

An AF_XDP zero-copy driver can now potentially determine if the
failure was due to a full Rx buffer, and if so stop processing more
frames, yielding to the userland AF_XDP application.
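
A zero-copy driver's Rx loop can then react roughly like this (sketch; the
internal exit code names are hypothetical, cf. the *_XDP_EXIT codes
introduced later in this series):

  err = xdp_do_redirect(netdev, xdp_buff, xdp_prog);
  if (!err) {
          result = DRV_XDP_REDIR;          /* hypothetical success code */
  } else if (err == -ENOBUFS) {
          /* XSK Rx ring full: stop Rx processing, let user space drain it */
          result = DRV_XDP_EXIT;           /* hypothetical */
  } else {
          result = DRV_XDP_CONSUMED;       /* hypothetical: drop the frame */
  }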

Signed-off-by: Björn Töpel <bjorn@kernel.org>
Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Jesper Dangaard Brouer <brouer@redhat.com>
Link: https://lore.kernel.org/bpf/20220413153015.453864-2-maciej.fijalkowski@intel.com
Yu Zhe [Wed, 13 Apr 2022 01:50:48 +0000 (18:50 -0700)]
bpf: Remove unnecessary type castings

Remove/clean up unnecessary void * type castings.

Signed-off-by: Yu Zhe <yuzhe@nfschina.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20220413015048.12319-1-yuzhe@nfschina.com
Daniel Borkmann [Wed, 13 Apr 2022 19:37:57 +0000 (21:37 +0200)]
Merge branch 'pr/bpf-sysctl' into bpf-next

Pull the migration of the BPF sysctl handling into the BPF core. This work
is needed in both the sysctl-next and bpf-next trees.

Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Cc: Luis Chamberlain <mcgrof@kernel.org>
Link: https://lore.kernel.org/bpf/b615bd44-6bd1-a958-7e3f-dd2ff58931a1@iogearbox.net
Yan Zhu [Thu, 7 Apr 2022 07:07:59 +0000 (15:07 +0800)]
bpf: Move BPF sysctls from kernel/sysctl.c to BPF core

We're moving sysctls out of kernel/sysctl.c as it is a mess. We
already moved all filesystem sysctls out, and over time the goal
is to move all sysctls out to their own subsystem/actual user.

kernel/sysctl.c has grown into an insane mess and it's easy to run
into conflicts with it. The effort to move sysctls out into various
subsystems is part of this.

Signed-off-by: Yan Zhu <zhuyan34@huawei.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Cc: Luis Chamberlain <mcgrof@kernel.org>
Link: https://lore.kernel.org/bpf/20220407070759.29506-1-zhuyan34@huawei.com
Alan Maguire [Mon, 11 Apr 2022 15:21:36 +0000 (16:21 +0100)]
libbpf: Usdt aarch64 arg parsing support

Parsing of USDT arguments is architecture-specific. On aarch64 it is
relatively easy since the registers used are x[0-31] and sp. The format is
slightly different compared to x86_64. Possible forms are:

- "size@[reg[,offset]]" for dereferences, e.g. "-8@[sp,76]" and "-4@[sp]";
- "size@reg" for register values, e.g. "-4@x0";
- "size@value" for raw values, e.g. "-8@1".

Signed-off-by: Alan Maguire <alan.maguire@oracle.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/1649690496-1902-2-git-send-email-alan.maguire@oracle.com
Yuntao Wang [Sun, 10 Apr 2022 06:00:19 +0000 (14:00 +0800)]
bpf: Remove redundant assignment to meta.seq in __task_seq_show()

The seq argument is assigned to meta.seq twice; the second assignment is
redundant, so remove it.

This patch also removes a redundant space in bpf_iter_link_attach().

Signed-off-by: Yuntao Wang <ytcoode@gmail.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Song Liu <songliubraving@fb.com>
Link: https://lore.kernel.org/bpf/20220410060020.307283-1-ytcoode@gmail.com
Geliang Tang [Fri, 8 Apr 2022 23:58:17 +0000 (07:58 +0800)]
selftests/bpf: Drop duplicate max/min definitions

Drop the duplicate min() and MAX() macro definitions in prog_tests and use
MIN() or MAX() from sys/param.h instead.

Signed-off-by: Geliang Tang <geliang.tang@suse.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Song Liu <songliubraving@fb.com>
Link: https://lore.kernel.org/bpf/1ae276da9925c2de59b5bdc93b693b4c243e692e.1649462033.git.geliang.tang@suse.com
Pu Lehui [Sun, 10 Apr 2022 10:12:46 +0000 (18:12 +0800)]
riscv, bpf: Implement more atomic operations for RV64

This patch implements more BPF atomic operations for RV64. The newly
added operations are shown below:

  atomic[64]_[fetch_]add
  atomic[64]_[fetch_]and
  atomic[64]_[fetch_]or
  atomic[64]_xchg
  atomic[64]_cmpxchg

Since the RISC-V specification does not provide an AMO instruction for the
CAS operation, we use lr/sc instructions for cmpxchg and AMO instructions
for the rest of the ops.

Tests "test_bpf.ko" and "test_progs -t atomic" have passed, as well
as "test_verifier" with no new failure cases.
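
For reference, these are the operations a BPF program typically emits through
Clang's __sync builtins; a minimal BPF C sketch (variable names illustrative,
statements assumed to live inside a BPF program body) of what now gets JITed
natively on RV64:

  __u64 counter = 0, flags = 0;      /* illustrative globals in the BPF object */

  __sync_fetch_and_add(&counter, 1);                         /* atomic[64]_[fetch_]add */
  __sync_fetch_and_and(&flags, ~1ULL);                       /* atomic[64]_[fetch_]and */
  __sync_fetch_and_or(&flags, 2ULL);                         /* atomic[64]_[fetch_]or  */
  __u64 old1 = __sync_lock_test_and_set(&counter, 0);        /* atomic[64]_xchg        */
  __u64 old2 = __sync_val_compare_and_swap(&counter, 0, 1);  /* atomic[64]_cmpxchg     */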

Signed-off-by: Pu Lehui <pulehui@huawei.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Song Liu <songliubraving@fb.com>
Acked-by: Björn Töpel <bjorn@kernel.org>
Link: https://lore.kernel.org/bpf/20220410101246.232875-1-pulehui@huawei.com
Andrii Nakryiko [Mon, 11 Apr 2022 03:17:16 +0000 (20:17 -0700)]
Merge branch 'bpf: RLIMIT_MEMLOCK cleanups'

Yafang Shao says:

====================

We have switched to memcg-based memory accounting and thus the rlimit is
not needed any more. LIBBPF_STRICT_AUTO_RLIMIT_MEMLOCK was introduced in
libbpf for backward compatibility, so we can use it instead now.

This patchset cleans up the usage of RLIMIT_MEMLOCK in tools/bpf/,
tools/testing/selftests/bpf and samples/bpf. The file
tools/testing/selftests/bpf/bpf_rlimit.h is removed. The included header
sys/resource.h is removed from many files where it is no longer needed.

- v4: Squash patches and use customary subject prefixes. (Andrii)
- v3: Get rid of bpf_rlimit.h and fix some typos (Andrii)
- v2: Use libbpf_set_strict_mode instead. (Andrii)
- v1: https://lore.kernel.org/bpf/20220320060815.7716-2-laoar.shao@gmail.com/
====================

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Yafang Shao [Sat, 9 Apr 2022 12:59:58 +0000 (12:59 +0000)]
tools/runqslower: Use libbpf 1.0 API mode instead of RLIMIT_MEMLOCK

Explicitly set libbpf 1.0 API mode, so that we can avoid using the deprecated
RLIMIT_MEMLOCK.
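
The change boils down to a one-time call early in main() instead of the
setrlimit() boilerplate, roughly:

  #include <bpf/libbpf.h>

  int main(int argc, char **argv)
  {
          /* Opt into libbpf 1.0 behavior: with memcg-based memory accounting
           * in the kernel, bumping RLIMIT_MEMLOCK is no longer necessary. */
          libbpf_set_strict_mode(LIBBPF_STRICT_ALL);

          /* ... open and load BPF skeletons as before ... */
          return 0;
  }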

Signed-off-by: Yafang Shao <laoar.shao@gmail.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20220409125958.92629-5-laoar.shao@gmail.com
Yafang Shao [Sat, 9 Apr 2022 12:59:57 +0000 (12:59 +0000)]
bpftool: Use libbpf 1.0 API mode instead of RLIMIT_MEMLOCK

We have switched to memcg-based memory accounting and thus the rlimit is
not needed any more. LIBBPF_STRICT_AUTO_RLIMIT_MEMLOCK was introduced in
libbpf for backward compatibility, so we can use it instead now.

libbpf_set_strict_mode() always returns 0, so we don't need to check
whether the return value is 0 or not.

Signed-off-by: Yafang Shao <laoar.shao@gmail.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20220409125958.92629-4-laoar.shao@gmail.com
Yafang Shao [Sat, 9 Apr 2022 12:59:56 +0000 (12:59 +0000)]
selftests/bpf: Use libbpf 1.0 API mode instead of RLIMIT_MEMLOCK

We have switched to memcg-based memory accounting and thus the rlimit is
not needed any more. LIBBPF_STRICT_AUTO_RLIMIT_MEMLOCK was introduced in
libbpf for backward compatibility, so we can use it instead now. After
this change, the header tools/testing/selftests/bpf/bpf_rlimit.h can be
removed.

This patch also removes the useless header sys/resource.h from many files
in tools/testing/selftests/bpf/.

Signed-off-by: Yafang Shao <laoar.shao@gmail.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20220409125958.92629-3-laoar.shao@gmail.com
Yafang Shao [Sat, 9 Apr 2022 12:59:55 +0000 (12:59 +0000)]
samples/bpf: Use libbpf 1.0 API mode instead of RLIMIT_MEMLOCK

We have switched to memcg-based memory accounting and thus the rlimit is
not needed any more. LIBBPF_STRICT_AUTO_RLIMIT_MEMLOCK was introduced in
libbpf for backward compatibility, so we can use it instead now.

This patch also removes the useless header sys/resource.h from many files
in samples/bpf.

Signed-off-by: Yafang Shao <laoar.shao@gmail.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20220409125958.92629-2-laoar.shao@gmail.com
Runqing Yang [Sat, 9 Apr 2022 14:49:28 +0000 (22:49 +0800)]
libbpf: Fix a bug with checking bpf_probe_read_kernel() support in old kernels

Background:
Libbpf automatically replaces calls to the BPF bpf_probe_read_{kernel,user}
[_str]() helpers with bpf_probe_read[_str]() if libbpf detects that the
kernel doesn't support the new APIs. Specifically, libbpf invokes the
probe_kern_probe_read_kernel function to load into the kernel a small eBPF
program that invokes the bpf_probe_read_kernel API, and lets the kernel
check whether the new API is valid. If the loading fails, libbpf considers
the new API invalid and replaces it with the old API.

static int probe_kern_probe_read_kernel(void)
{
struct bpf_insn insns[] = {
BPF_MOV64_REG(BPF_REG_1, BPF_REG_10), /* r1 = r10 (fp) */
BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, -8), /* r1 += -8 */
BPF_MOV64_IMM(BPF_REG_2, 8), /* r2 = 8 */
BPF_MOV64_IMM(BPF_REG_3, 0), /* r3 = 0 */
BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_probe_read_kernel),
BPF_EXIT_INSN(),
};
int fd, insn_cnt = ARRAY_SIZE(insns);

fd = bpf_prog_load(BPF_PROG_TYPE_KPROBE, NULL,
                           "GPL", insns, insn_cnt, NULL);
return probe_fd(fd);
}

Bug:
On older kernel versions [0], the kernel checks whether the version
number provided in the bpf syscall matches LINUX_VERSION_CODE.
If it does not match, the bpf syscall fails. However, the
probe_kern_probe_read_kernel code does not set the kernel version
number provided to the bpf syscall, which causes the loading process
to always fail on old versions. It means that libbpf will replace the
new API with the old one even if the kernel supports the new one.

Solution:
After a discussion in [1], the solution is to use the BPF_PROG_TYPE_TRACEPOINT
program type instead of BPF_PROG_TYPE_KPROBE, because the kernel does not
enforce the version check for tracepoint programs. I tested the patch on old
kernels (4.18 and 4.19) and it works well.
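
In terms of the snippet above, the probe's load call simply becomes (sketch):

  fd = bpf_prog_load(BPF_PROG_TYPE_TRACEPOINT, NULL,
                     "GPL", insns, insn_cnt, NULL);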

  [0] https://elixir.bootlin.com/linux/v4.19/source/kernel/bpf/syscall.c#L1360
  [1] Closes: https://github.com/libbpf/libbpf/issues/473

Signed-off-by: Runqing Yang <rainkin1993@gmail.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20220409144928.27499-1-rainkin1993@gmail.com
Mykola Lysenko [Sat, 9 Apr 2022 00:17:49 +0000 (17:17 -0700)]
selftests/bpf: Improve by-name subtest selection logic in prog_tests

Improve subtest selection logic when using -t/-a/-d parameters.
In particular, more than one subtest can be specified or a
combination of tests / subtests.

-a send_signal -d send_signal/send_signal_nmi* - runs send_signal
test without nmi tests

-a send_signal/send_signal_nmi*,find_vma - runs two send_signal
subtests and find_vma test

-a 'send_signal*' -a find_vma -d send_signal/send_signal_nmi* - runs two
send_signal tests and the find_vma test, and disables two send_signal nmi
subtests

-t send_signal -t find_vma - runs two *send_signal* tests and one
*find_vma* test

This will allow us to have granular control over which subtests
to disable in the CI system instead of disabling whole tests.

Also, add a new selftest to avoid possible regressions when
changing the prog_test test name selection logic.

Signed-off-by: Mykola Lysenko <mykolal@fb.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20220409001750.529930-1-mykolal@fb.com
Vladimir Isaev [Fri, 8 Apr 2022 22:44:42 +0000 (01:44 +0300)]
libbpf: Add ARC support to bpf_tracing.h

Add PT_REGS macros suitable for ARCompact and ARCv2.

Signed-off-by: Vladimir Isaev <isaev@synopsys.com>
Signed-off-by: Sergey Matyukevich <geomatsi@gmail.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: Song Liu <songliubraving@fb.com>
Link: https://lore.kernel.org/bpf/20220408224442.599566-1-geomatsi@gmail.com
Jakub Kicinski [Sat, 9 Apr 2022 00:07:29 +0000 (17:07 -0700)]
Merge https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next

Daniel Borkmann says:

====================
pull-request: bpf-next 2022-04-09

We've added 63 non-merge commits during the last 9 day(s) which contain
a total of 68 files changed, 4852 insertions(+), 619 deletions(-).

The main changes are:

1) Add libbpf support for USDT (User Statically-Defined Tracing) probes.
   USDTs are an abstraction built on top of uprobes, critical for tracing
   and BPF, and widely used in production applications, from Andrii Nakryiko.

2) While Andrii was adding support for the x86{-64}-specific logic of parsing
   USDT argument specifications, Ilya followed up with USDT support for the
   s390 architecture, from Ilya Leoshkevich.

3) Support name-based attaching for uprobe BPF programs in libbpf. The format
   supported is `u[ret]probe/binary_path:[raw_offset|function[+offset]]`, e.g.
   attaching to libc malloc can be done in BPF via SEC("uprobe/libc.so.6:malloc")
   now, from Alan Maguire.

4) Various load/store optimizations for the arm64 JIT to shrink the image
   size by using arm64 str/ldr immediate instructions. Also enable pointer
   authentication to verify return address for JITed code, from Xu Kuohai.

5) BPF verifier fixes for write access checks to helper functions, e.g.
   rd-only memory from bpf_*_cpu_ptr() must not be passed to helpers that
   write into passed buffers, from Kumar Kartikeya Dwivedi.

6) Fix overly excessive stack map allocation for its base map structure and
   buckets, which slipped in from cleanups during the rlimit accounting removal
   back then, from Yuntao Wang.

7) Extend the unstable CT lookup helpers for XDP and tc/BPF to report netfilter
   connection tracking tuple direction, from Lorenzo Bianconi.

8) Improve bpftool dump to show BPF program/link type names, from Milan Landaverde.

9) Minor cleanups all over the place from various others.

* https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next: (63 commits)
  bpf: Fix excessive memory allocation in stack_map_alloc()
  selftests/bpf: Fix return value checks in perf_event_stackmap test
  selftests/bpf: Add CO-RE relos into linked_funcs selftests
  libbpf: Use weak hidden modifier for USDT BPF-side API functions
  libbpf: Don't error out on CO-RE relos for overridden weak subprogs
  samples, bpf: Move routes monitor in xdp_router_ipv4 in a dedicated thread
  libbpf: Allow WEAK and GLOBAL bindings during BTF fixup
  libbpf: Use strlcpy() in path resolution fallback logic
  libbpf: Add s390-specific USDT arg spec parsing logic
  libbpf: Make BPF-side of USDT support work on big-endian machines
  libbpf: Minor style improvements in USDT code
  libbpf: Fix use #ifdef instead of #if to avoid compiler warning
  libbpf: Potential NULL dereference in usdt_manager_attach_usdt()
  selftests/bpf: Uprobe tests should verify param/return values
  libbpf: Improve string parsing for uprobe auto-attach
  libbpf: Improve library identification for uprobe binary path resolution
  selftests/bpf: Test for writes to map key from BPF helpers
  selftests/bpf: Test passing rdonly mem to global func
  bpf: Reject writes for PTR_TO_MAP_KEY in check_helper_mem_access
  bpf: Check PTR_TO_MEM | MEM_RDONLY in check_helper_mem_access
  ...
====================

Link: https://lore.kernel.org/r/20220408231741.19116-1-daniel@iogearbox.net
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Yuntao Wang [Thu, 7 Apr 2022 13:04:23 +0000 (21:04 +0800)]
bpf: Fix excessive memory allocation in stack_map_alloc()

The 'n_buckets * (value_size + sizeof(struct stack_map_bucket))' part of the
allocated memory for 'smap' is never used after the memlock accounting was
removed, thus get rid of it.

[ Note, Daniel:

Commit b936ca643ade ("bpf: rework memlock-based memory accounting for maps")
moved `cost += n_buckets * (value_size + sizeof(struct stack_map_bucket))`
up and therefore before the bpf_map_area_alloc() allocation, sigh. In a later
step commit c85d69135a91 ("bpf: move memory size checks to bpf_map_charge_init()"),
and the overflow checks of `cost >= U32_MAX - PAGE_SIZE` moved into
bpf_map_charge_init(). And then 370868107bf6 ("bpf: Eliminate rlimit-based
memory accounting for stackmap maps") finally removed the bpf_map_charge_init().
Anyway, the original code did the allocation the same way as /after/ this fix. ]

Fixes: b936ca643ade ("bpf: rework memlock-based memory accounting for maps")
Signed-off-by: Yuntao Wang <ytcoode@gmail.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20220407130423.798386-1-ytcoode@gmail.com
Bert Kenward [Thu, 7 Apr 2022 15:24:02 +0000 (16:24 +0100)]
sfc: use hardware tx timestamps for more than PTP

The 8000 series and newer NICs all get hardware timestamps from the MAC
and can provide timestamps on a normal TX queue, rather than via a slow
path through the MC. As such we can use this path for any packet where a
hardware timestamp is requested.

This also enables support for PTP over transports other than IPv4+UDP.

Signed-off-by: Bert Kenward <bkenward@solarflare.com>
Signed-off-by: Edward Cree <ecree@xilinx.com>
Link: https://lore.kernel.org/r/510652dc-54b4-0e11-657e-e37ee3ca26a9@gmail.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Marek Vasut [Thu, 7 Apr 2022 10:55:34 +0000 (12:55 +0200)]
net: phy: micrel: ksz9031/ksz9131: add cabletest support

Add cable test support for Micrel KSZ9x31 PHYs.

Tested on i.MX8M Mini with KSZ9131RNX in 100/Full mode with pairs shuffled
before magnetics:
(note: Cable test started/completed messages are omitted)

  mx8mm-ksz9131-a-d-connected$ ethtool --cable-test eth0
  Pair A code OK
  Pair B code Short within Pair
  Pair B, fault length: 0.80m
  Pair C code Short within Pair
  Pair C, fault length: 0.80m
  Pair D code OK

  mx8mm-ksz9131-a-b-connected$ ethtool --cable-test eth0
  Pair A code OK
  Pair B code OK
  Pair C code Short within Pair
  Pair C, fault length: 0.00m
  Pair D code Short within Pair
  Pair D, fault length: 0.00m

Tested on R8A77951 Salvator-XS with KSZ9031RNX and all four pairs connected:
(note: Cable test started/completed messages are omitted)

  r8a7795-ksz9031-all-connected$ ethtool --cable-test eth0
  Pair A code OK
  Pair B code OK
  Pair C code OK
  Pair D code OK

The CTRL1000 CTL1000_ENABLE_MASTER and CTL1000_AS_MASTER bits are not
restored by calling phy_init_hw(); they must be manually cached in
ksz9x31_cable_test_start() and restored at the end of
ksz9x31_cable_test_get_status().
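
Wiring this up amounts to filling the generic cable-test hooks in the PHY
driver entry, roughly (illustrative entry; the driver's existing phy_driver
table gains these hooks):

  static struct phy_driver ksz9131_driver = {
          /* existing fields unchanged */
          .cable_test_start       = ksz9x31_cable_test_start,
          .cable_test_get_status  = ksz9x31_cable_test_get_status,
  };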

Signed-off-by: Marek Vasut <marex@denx.de>
Cc: Heiner Kallweit <hkallweit1@gmail.com>
Cc: Oleksij Rempel <linux@rempel-privat.de>
Cc: Paolo Abeni <pabeni@redhat.com>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Tested-by: Claudiu Beznea <claudiu.beznea@microchip.com>
Link: https://lore.kernel.org/r/20220407105534.85833-1-marex@denx.de
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Yuntao Wang [Fri, 8 Apr 2022 04:14:52 +0000 (12:14 +0800)]
selftests/bpf: Fix return value checks in perf_event_stackmap test

The bpf_get_stackid() function may also return 0 on success as per UAPI BPF
helper documentation. Therefore, correct checks from 'val > 0' to 'val >= 0'
to ensure that they cover all possible success return values.

Signed-off-by: Yuntao Wang <ytcoode@gmail.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20220408041452.933944-1-ytcoode@gmail.com
Andrii Nakryiko [Fri, 8 Apr 2022 18:14:25 +0000 (11:14 -0700)]
selftests/bpf: Add CO-RE relos into linked_funcs selftests

Add CO-RE relocations into __weak subprogs for the multi-file linked_funcs
selftest to make sure libbpf handles such a combination well.

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20220408181425.2287230-4-andrii@kernel.org
Andrii Nakryiko [Fri, 8 Apr 2022 18:14:24 +0000 (11:14 -0700)]
libbpf: Use weak hidden modifier for USDT BPF-side API functions

Use __weak __hidden for the bpf_usdt_xxx() APIs instead of the much more
confusing `static inline __noinline`. This was previously impossible due
to libbpf erroring out on CO-RE relocations pointing to eliminated weak
subprogs. Now that the previous patch has fixed this issue, switch back to
__weak __hidden as it's a more direct way of specifying the desired
behavior.

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20220408181425.2287230-3-andrii@kernel.org
Andrii Nakryiko [Fri, 8 Apr 2022 18:14:23 +0000 (11:14 -0700)]
libbpf: Don't error out on CO-RE relos for overridden weak subprogs

During BPF static linking, all the ELF relocations and .BTF.ext
information (including CO-RE relocations) are preserved for __weak
subprograms that were logically overridden by either a previous weak
subprogram instance or by a corresponding "strong" (non-weak) subprogram.
This is just how native user-space linkers work, nothing new.

But libbpf is over-zealous and errors out when it encounters a CO-RE
relocation belonging to such an eliminated weak subprogram. Instead of
erroring out on this expected situation, log a debug-level message and
skip the relocation.

Fixes: db2b8b06423c ("libbpf: Support CO-RE relocations for multi-prog sections")
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20220408181425.2287230-2-andrii@kernel.org
Lorenzo Bianconi [Tue, 5 Apr 2022 14:15:14 +0000 (16:15 +0200)]
samples, bpf: Move routes monitor in xdp_router_ipv4 in a dedicated thread

In order to not miss any netlink messages from the kernel, move the routes
monitor to a dedicated thread.

Fixes: 85bf1f51691c ("samples: bpf: Convert xdp_router_ipv4 to XDP samples helper")
Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/e364b817c69ded73be24b677ab47a157f7c21b64.1649167911.git.lorenzo@kernel.org
Andrii Nakryiko [Thu, 7 Apr 2022 23:04:46 +0000 (16:04 -0700)]
libbpf: Allow WEAK and GLOBAL bindings during BTF fixup

During BTF fix-up for global variables, a global variable can be weak and
will then have an STB_WEAK binding in ELF. Support such global variables
in addition to non-weak ones.

This is not a problem when using BPF static linking, as the BPF static
linker "fixes up" BTF during generation so that libbpf doesn't have to do
it anymore during bpf_object__open(). That, together with the (currently)
pretty rare use of __weak variables and maps, is why this went unnoticed
for a while.

Reported-by: Hengqi Chen <hengqi.chen@gmail.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20220407230446.3980075-2-andrii@kernel.org
Andrii Nakryiko [Thu, 7 Apr 2022 23:04:45 +0000 (16:04 -0700)]
libbpf: Use strlcpy() in path resolution fallback logic

The Coverity static analyzer complains that strcpy() can cause a buffer
overflow. Use libbpf_strlcpy() instead to be 100% sure this doesn't
happen.

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20220407230446.3980075-1-andrii@kernel.org
Andrii Nakryiko [Fri, 8 Apr 2022 03:59:12 +0000 (20:59 -0700)]
Merge branch 'Add USDT support for s390'

Ilya Leoshkevich says:

====================

This series adds USDT support for s390, making the "usdt" test pass
there. Patch 1 is a collection of minor cleanups, patch 2 adds
BPF-side support, patch 3 adds userspace-side support.
====================

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Ilya Leoshkevich [Thu, 7 Apr 2022 21:44:11 +0000 (23:44 +0200)]
libbpf: Add s390-specific USDT arg spec parsing logic

The logic is superficially similar to that of x86, but the small
differences (no need for a register table or dynamic allocation of
register names, no $ sign before constants) make maintaining a common
implementation too burdensome. Therefore simply add an s390x-specific
version of parse_usdt_arg().

Note that while bcc supports index registers, this patch does not. This
should not be a problem in most cases, since s390 uses "nor" as the
default value for STAP_SDT_ARG_CONSTRAINT.

Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20220407214411.257260-4-iii@linux.ibm.com
David S. Miller [Fri, 8 Apr 2022 12:45:44 +0000 (13:45 +0100)]
Merge branch 'net-sched-offload-failure-error-reporting'

Ido Schimmel says:

====================
net/sched: Better error reporting for offload failures

This patchset improves error reporting to user space when offload fails
during the flow action setup phase. That is, when failures occur in the
actions themselves, even before calling device drivers. Requested /
reported in [1].

This is done by passing extack to the offload_act_setup() callback and
making use of it in the various actions.

Patches #1-#2 change matchall and flower to log error messages to user
space in accordance with the verbose flag.

Patch #3 passes extack to the offload_act_setup() callback from the
various call sites, including matchall and flower.

Patches #4-#11 make use of extack in the various actions to report
offload failures.

Patch #12 adds an error message when the action does not support offload
at all.

Patches #13-#14 change matchall and flower to stop overwriting more
specific error messages.

[1] https://lore.kernel.org/netdev/20220317185249.5mff5u2x624pjewv@skbuf/
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
Ido Schimmel [Thu, 7 Apr 2022 07:35:33 +0000 (10:35 +0300)]
net/sched: flower: Avoid overwriting error messages

The various error paths of tc_setup_offload_action() now report specific
error messages. Remove the generic messages to avoid overwriting the
more specific ones.

Before:

 # tc filter add dev dummy0 ingress pref 1 proto ip flower skip_sw dst_ip 198.51.100.1 action police rate 100Mbit burst 10000
 Error: cls_flower: Failed to setup flow action.
 We have an error talking to the kernel

After:

 # tc filter add dev dummy0 ingress pref 1 proto ip flower skip_sw dst_ip 198.51.100.1 action police rate 100Mbit burst 10000
 Error: act_police: Offload not supported when conform/exceed action is "reclassify".
 We have an error talking to the kernel

Signed-off-by: Ido Schimmel <idosch@nvidia.com>
Reviewed-by: Petr Machata <petrm@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Ido Schimmel [Thu, 7 Apr 2022 07:35:32 +0000 (10:35 +0300)]
net/sched: matchall: Avoid overwriting error messages

The various error paths of tc_setup_offload_action() now report specific
error messages. Remove the generic messages to avoid overwriting the
more specific ones.

Before:

 # tc filter add dev dummy0 ingress pref 1 proto all matchall skip_sw action police rate 100Mbit burst 10000
 Error: cls_matchall: Failed to setup flow action.
 We have an error talking to the kernel

After:

 # tc filter add dev dummy0 ingress pref 1 proto all matchall skip_sw action police rate 100Mbit burst 10000
 Error: act_police: Offload not supported when conform/exceed action is "reclassify".
 We have an error talking to the kernel

Signed-off-by: Ido Schimmel <idosch@nvidia.com>
Reviewed-by: Petr Machata <petrm@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Ido Schimmel [Thu, 7 Apr 2022 07:35:31 +0000 (10:35 +0300)]
net/sched: cls_api: Add extack message for unsupported action offload

For better error reporting to user space, add an extack message when the
requested action does not support offload.

Example:

 # echo 1 > /sys/kernel/tracing/events/netlink/netlink_extack/enable

 # tc filter add dev dummy0 ingress pref 1 proto all matchall skip_sw action nat ingress 192.0.2.1 198.51.100.1
 Error: cls_matchall: Failed to setup flow action.
 We have an error talking to the kernel

 # cat /sys/kernel/tracing/trace_pipe
       tc-181     [000] b..1.    88.406093: netlink_extack: msg=Action does not support offload
       tc-181     [000] .....    88.406108: netlink_extack: msg=cls_matchall: Failed to setup flow action
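
Kernel-side, the added check is the usual extack reporting pattern, roughly
(sketch, not the exact hunk):

  if (!act->ops->offload_act_setup) {
          NL_SET_ERR_MSG(extack, "Action does not support offload");
          return -EOPNOTSUPP;
  }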

Signed-off-by: Ido Schimmel <idosch@nvidia.com>
Reviewed-by: Petr Machata <petrm@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Ido Schimmel [Thu, 7 Apr 2022 07:35:30 +0000 (10:35 +0300)]
net/sched: act_vlan: Add extack message for offload failure

For better error reporting to user space, add an extack message when
vlan action offload fails.

Currently, the failure cannot be triggered, but add a message in case
the action is extended in the future to support more than the current
set of modes.

Signed-off-by: Ido Schimmel <idosch@nvidia.com>
Reviewed-by: Petr Machata <petrm@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Ido Schimmel [Thu, 7 Apr 2022 07:35:29 +0000 (10:35 +0300)]
net/sched: act_tunnel_key: Add extack message for offload failure

For better error reporting to user space, add an extack message when
tunnel_key action offload fails.

Currently, the failure cannot be triggered, but add a message in case
the action is extended in the future to support more than set/release
modes.

Signed-off-by: Ido Schimmel <idosch@nvidia.com>
Reviewed-by: Petr Machata <petrm@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Ido Schimmel [Thu, 7 Apr 2022 07:35:28 +0000 (10:35 +0300)]
net/sched: act_skbedit: Add extack messages for offload failure

For better error reporting to user space, add extack messages when
skbedit action offload fails.

Example:

 # echo 1 > /sys/kernel/tracing/events/netlink/netlink_extack/enable

 # tc filter add dev dummy0 ingress pref 1 proto all matchall skip_sw action skbedit queue_mapping 1234
 Error: cls_matchall: Failed to setup flow action.
 We have an error talking to the kernel

 # cat /sys/kernel/tracing/trace_pipe
       tc-185     [002] b..1.    31.802414: netlink_extack: msg=act_skbedit: Offload not supported when "queue_mapping" option is used
       tc-185     [002] .....    31.802418: netlink_extack: msg=cls_matchall: Failed to setup flow action

 # tc filter add dev dummy0 ingress pref 1 proto all matchall skip_sw action skbedit inheritdsfield
 Error: cls_matchall: Failed to setup flow action.
 We have an error talking to the kernel

 # cat /sys/kernel/tracing/trace_pipe
       tc-187     [002] b..1.    45.985145: netlink_extack: msg=act_skbedit: Offload not supported when "inheritdsfield" option is used
       tc-187     [002] .....    45.985160: netlink_extack: msg=cls_matchall: Failed to setup flow action

Signed-off-by: Ido Schimmel <idosch@nvidia.com>
Reviewed-by: Petr Machata <petrm@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agonet/sched: act_police: Add extack messages for offload failure
Ido Schimmel [Thu, 7 Apr 2022 07:35:27 +0000 (10:35 +0300)]
net/sched: act_police: Add extack messages for offload failure

For better error reporting to user space, add extack messages when
police action offload fails.

Example:

 # echo 1 > /sys/kernel/tracing/events/netlink/netlink_extack/enable

 # tc filter add dev dummy0 ingress pref 1 proto all matchall skip_sw action police rate 100Mbit burst 10000
 Error: cls_matchall: Failed to setup flow action.
 We have an error talking to the kernel

 # cat /sys/kernel/tracing/trace_pipe
       tc-182     [000] b..1.    21.592969: netlink_extack: msg=act_police: Offload not supported when conform/exceed action is "reclassify"
       tc-182     [000] .....    21.592982: netlink_extack: msg=cls_matchall: Failed to setup flow action

 # tc filter add dev dummy0 ingress pref 1 proto all matchall skip_sw action police rate 100Mbit burst 10000 conform-exceed drop/continue
 Error: cls_matchall: Failed to setup flow action.
 We have an error talking to the kernel

 # cat /sys/kernel/tracing/trace_pipe
       tc-184     [000] b..1.    38.882579: netlink_extack: msg=act_police: Offload not supported when conform/exceed action is "continue"
       tc-184     [000] .....    38.882593: netlink_extack: msg=cls_matchall: Failed to setup flow action
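
A hedged sketch of how such a rejection might look in the police
action's offload setup path; only the "reclassify" case from the trace
above is spelled out, and the surrounding code is illustrative:

 /* Illustrative only: conform/exceed verdicts the offload path cannot
  * express are refused with a per-module extack message.
  */
 if (tc_act == TC_ACT_RECLASSIFY) {
         NL_SET_ERR_MSG_MOD(extack,
                            "Offload not supported when conform/exceed action is \"reclassify\"");
         return -EOPNOTSUPP;
 }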

Signed-off-by: Ido Schimmel <idosch@nvidia.com>
Reviewed-by: Petr Machata <petrm@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agonet/sched: act_pedit: Add extack message for offload failure
Ido Schimmel [Thu, 7 Apr 2022 07:35:26 +0000 (10:35 +0300)]
net/sched: act_pedit: Add extack message for offload failure

For better error reporting to user space, add an extack message when
pedit action offload fails.

Currently, the failure cannot be triggered, but add a message in case
the action is extended in the future to support more than set/add
commands.

Signed-off-by: Ido Schimmel <idosch@nvidia.com>
Reviewed-by: Petr Machata <petrm@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agonet/sched: act_mpls: Add extack messages for offload failure
Ido Schimmel [Thu, 7 Apr 2022 07:35:25 +0000 (10:35 +0300)]
net/sched: act_mpls: Add extack messages for offload failure

For better error reporting to user space, add extack messages when mpls
action offload fails.

Example:

 # echo 1 > /sys/kernel/tracing/events/netlink/netlink_extack/enable

 # tc filter add dev dummy0 ingress pref 1 proto all matchall skip_sw action mpls dec_ttl
 Error: cls_matchall: Failed to setup flow action.
 We have an error talking to the kernel

 # cat /sys/kernel/tracing/trace_pipe
       tc-182     [000] b..1.    18.693915: netlink_extack: msg=act_mpls: Offload not supported when "dec_ttl" option is used
       tc-182     [000] .....    18.693921: netlink_extack: msg=cls_matchall: Failed to setup flow action

Signed-off-by: Ido Schimmel <idosch@nvidia.com>
Reviewed-by: Petr Machata <petrm@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agonet/sched: act_mirred: Add extack message for offload failure
Ido Schimmel [Thu, 7 Apr 2022 07:35:24 +0000 (10:35 +0300)]
net/sched: act_mirred: Add extack message for offload failure

For better error reporting to user space, add an extack message when
mirred action offload fails.

Currently, the failure cannot be triggered, but add a message in case
the action is extended in the future to support more than ingress/egress
mirror/redirect.

Signed-off-by: Ido Schimmel <idosch@nvidia.com>
Reviewed-by: Petr Machata <petrm@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agonet/sched: act_gact: Add extack messages for offload failure
Ido Schimmel [Thu, 7 Apr 2022 07:35:23 +0000 (10:35 +0300)]
net/sched: act_gact: Add extack messages for offload failure

For better error reporting to user space, add extack messages when gact
action offload fails.

Example:

 # echo 1 > /sys/kernel/tracing/events/netlink/netlink_extack/enable

 # tc filter add dev dummy0 ingress pref 1 proto all matchall skip_sw action continue
 Error: cls_matchall: Failed to setup flow action.
 We have an error talking to the kernel

 # cat /sys/kernel/tracing/trace_pipe
       tc-181     [002] b..1.   105.493450: netlink_extack: msg=act_gact: Offload of "continue" action is not supported
       tc-181     [002] .....   105.493466: netlink_extack: msg=cls_matchall: Failed to setup flow action

 # tc filter add dev dummy0 ingress pref 1 proto all matchall skip_sw action reclassify
 Error: cls_matchall: Failed to setup flow action.
 We have an error talking to the kernel

 # cat /sys/kernel/tracing/trace_pipe
       tc-183     [002] b..1.   124.126477: netlink_extack: msg=act_gact: Offload of "reclassify" action is not supported
       tc-183     [002] .....   124.126489: netlink_extack: msg=cls_matchall: Failed to setup flow action

 # tc filter add dev dummy0 ingress pref 1 proto all matchall skip_sw action pipe action drop
 Error: cls_matchall: Failed to setup flow action.
 We have an error talking to the kernel

 # cat /sys/kernel/tracing/trace_pipe
       tc-185     [002] b..1.   137.097791: netlink_extack: msg=act_gact: Offload of "pipe" action is not supported
       tc-185     [002] .....   137.097804: netlink_extack: msg=cls_matchall: Failed to setup flow action

Signed-off-by: Ido Schimmel <idosch@nvidia.com>
Reviewed-by: Petr Machata <petrm@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agonet/sched: act_api: Add extack to offload_act_setup() callback
Ido Schimmel [Thu, 7 Apr 2022 07:35:22 +0000 (10:35 +0300)]
net/sched: act_api: Add extack to offload_act_setup() callback

The callback is used by various actions to populate the flow action
structure prior to offload. Pass extack to this callback so that the
various actions will be able to report accurate error messages to user
space.
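
Indicatively, the offload_act_setup() member of struct tc_action_ops
gains a trailing extack argument (member shown in isolation; the exact
signature may differ from the patch):

 int (*offload_act_setup)(struct tc_action *act, void *entry_data,
                          u32 *index_inc, bool bind,
                          struct netlink_ext_ack *extack);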

Signed-off-by: Ido Schimmel <idosch@nvidia.com>
Reviewed-by: Petr Machata <petrm@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agonet/sched: flower: Take verbose flag into account when logging error messages
Ido Schimmel [Thu, 7 Apr 2022 07:35:21 +0000 (10:35 +0300)]
net/sched: flower: Take verbose flag into account when logging error messages

The verbose flag was added in commit 81c7288b170a ("sched: cls: enable
verbose logging") to avoid suppressing logging of error messages that
occur "when the rule is not to be exclusively executed by the hardware".

However, such error messages are currently suppressed when setup of flow
action fails. Take the verbose flag into account to avoid suppressing
error messages. This is done by using the extack pointer initialized by
tc_cls_common_offload_init(), which performs the necessary checks.

Before:

 # tc filter add dev dummy0 ingress pref 1 proto ip flower dst_ip 198.51.100.1 action police rate 100Mbit burst 10000
 # tc filter add dev dummy0 ingress pref 2 proto ip flower verbose dst_ip 198.51.100.1 action police rate 100Mbit burst 10000

After:

 # tc filter add dev dummy0 ingress pref 1 proto ip flower dst_ip 198.51.100.1 action police rate 100Mbit burst 10000
 # tc filter add dev dummy0 ingress pref 2 proto ip flower verbose dst_ip 198.51.100.1 action police rate 100Mbit burst 10000
 Warning: cls_flower: Failed to setup flow action.
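
The mechanism relied on here is the conditional inside
tc_cls_common_offload_init() (paraphrased below): the extack pointer is
only retained for skip_sw or verbose rules, so passing that pointer down
to the flow action setup reports errors exactly when the user asked for
them:

 if (tc_skip_sw(flags) || flags & TCA_CLS_FLAGS_VERBOSE)
         cls_common->extack = extack;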

Signed-off-by: Ido Schimmel <idosch@nvidia.com>
Reviewed-by: Petr Machata <petrm@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agonet/sched: matchall: Take verbose flag into account when logging error messages
Ido Schimmel [Thu, 7 Apr 2022 07:35:20 +0000 (10:35 +0300)]
net/sched: matchall: Take verbose flag into account when logging error messages

The verbose flag was added in commit 81c7288b170a ("sched: cls: enable
verbose logging") to avoid suppressing logging of error messages that
occur "when the rule is not to be exclusively executed by the hardware".

However, such error messages are currently suppressed when setup of flow
action fails. Take the verbose flag into account to avoid suppressing
error messages. This is done by using the extack pointer initialized by
tc_cls_common_offload_init(), which performs the necessary checks.

Signed-off-by: Ido Schimmel <idosch@nvidia.com>
Reviewed-by: Petr Machata <petrm@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agoMerge branch '100GbE' of git://git.kernel.org/pub/scm/linux/kernel/git/tnguy/nex
David S. Miller [Fri, 8 Apr 2022 12:41:31 +0000 (13:41 +0100)]
Merge branch '100GbE' of git://git./linux/kernel/git/tnguy/next-queue

Tony Nguyen says:

====================
100GbE Intel Wired LAN Driver Updates 2022-04-07

Alexander Lobakin says:

This hunts down several places around packet templates/dummies for
switch rules which are either repetitive, fragile or just not
really readable code.
It's a common need to add new packet templates and to review such
changes, so try to simplify both with the help of a pair of macros
and aliases.
ice_find_dummy_packet() became very complex at this point, with tons
of nested if-elses. It clearly showed this approach does not scale,
so convert its logic to a simple mask match over a static const array.

bloat-o-meter is happy about that (built w/ LLVM 13):

add/remove: 0/1 grow/shrink: 1/1 up/down: 2/-1058 (-1056)
Function                                     old     new   delta
ice_fill_adv_dummy_packet                    289     291      +2
ice_adv_add_update_vsi_list                  201       -    -201
ice_add_adv_rule                            2950    2093    -857
Total: Before=414512, After=413456, chg -0.25%
add/remove: 53/52 grow/shrink: 0/0 up/down: 4660/-3988 (672)
RO Data                                      old     new   delta
ice_dummy_pkt_profiles                         -     672    +672
Total: Before=37895, After=38567, chg +1.77%

Diffstat also looks nice, and adding new packet templates now takes
less lines.

We'll probably come out with dynamic template crafting in a while,
but for now let's improve what we have currently.
====================
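
A hypothetical sketch of the mask-match shape described above (names and
fields are illustrative, not the ice driver's):

 struct dummy_pkt_profile {
         u32 match;              /* flags this template requires */
         const u8 *pkt;          /* packet template bytes */
         u16 pkt_len;
 };

 static const struct dummy_pkt_profile *
 find_profile(const struct dummy_pkt_profile *tbl, int n, u32 flags)
 {
         int i;

         /* table ordered most-specific first, so the first hit wins */
         for (i = 0; i < n; i++)
                 if ((flags & tbl[i].match) == tbl[i].match)
                         return &tbl[i];
         return NULL;
 }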

Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agoMerge branch 'aspeed-mdio-c45'
David S. Miller [Fri, 8 Apr 2022 11:20:52 +0000 (12:20 +0100)]
Merge branch 'aspeed-mdio-c45'

Potin Lai says:

====================
mdio: aspeed: Add Clause 45 support for Aspeed MDIO

This patch series adds Clause 45 support for the Aspeed MDIO driver, and
separates the c22 and c45 implementations into different functions.

LINK: [v1] https://lore.kernel.org/all/20220329161949.19762-1-potin.lai@quantatw.com/
LINK: [v2] https://lore.kernel.org/all/20220406170055.28516-1-potin.lai@quantatw.com/

Changes v2 --> v3:
 - sort local variable sequence in reverse Christmas tree format.

Changes v1 --> v2:
 - add C45 to probe_capabilities
 - break one patch into 3 small patches
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agonet: mdio: aspeed: Add c45 support
Potin Lai [Thu, 7 Apr 2022 01:17:38 +0000 (09:17 +0800)]
net: mdio: aspeed: Add c45 support

Add Clause 45 support for Aspeed mdio driver.

Signed-off-by: Potin Lai <potin.lai@quantatw.com>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agonet: mdio: aspeed: Introduce read write function for c22 and c45
Potin Lai [Thu, 7 Apr 2022 01:17:37 +0000 (09:17 +0800)]
net: mdio: aspeed: Introduce read write function for c22 and c45

Add the following functions to move the implementation out of
aspeed_mdio_read() and aspeed_mdio_write().

c22:
 - aspeed_mdio_read_c22()
 - aspeed_mdio_write_c22()

c45:
 - aspeed_mdio_read_c45()
 - aspeed_mdio_write_c45()

Signed-off-by: Potin Lai <potin.lai@quantatw.com>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agonet: mdio: aspeed: move reg accessing part into separate functions
Potin Lai [Thu, 7 Apr 2022 01:17:36 +0000 (09:17 +0800)]
net: mdio: aspeed: move reg accessing part into separate functions

Add aspeed_mdio_op() and aspeed_mdio_get_data() for register accesses.

aspeed_mdio_op() handles operations: it writes the command to the control
register, then waits until the operation is finished (bit 31 is cleared).

aspeed_mdio_get_data() fetches the result of the operation from the data
register.
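
A minimal sketch of the split under those assumptions (register offsets,
struct layout and poll timings are hypothetical; bit 31 is the busy/fire
bit described above):

 static int aspeed_mdio_op(struct aspeed_mdio *ctx, u32 cmd)
 {
         u32 ctrl = cmd | BIT(31);       /* fire the command */

         writel(ctrl, ctx->base + MDIO_CTRL_REG);
         /* wait for the controller to clear bit 31 */
         return readl_poll_timeout(ctx->base + MDIO_CTRL_REG, ctrl,
                                   !(ctrl & BIT(31)), 10, 1000);
 }

 static u16 aspeed_mdio_get_data(struct aspeed_mdio *ctx)
 {
         return readl(ctx->base + MDIO_DATA_REG) & 0xffff;
 }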

Signed-off-by: Potin Lai <potin.lai@quantatw.com>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agonet: atm: remove the ambassador driver
Jakub Kicinski [Wed, 6 Apr 2022 04:16:27 +0000 (21:16 -0700)]
net: atm: remove the ambassador driver

The driver for ATM Ambassador devices spews build warnings on
microblaze. The virt_to_bus() calls discard the volatile keyword.
The right thing to do would be to migrate this driver to a modern
DMA API but it seems unlikely anyone is actually using it.
There have been no fixes or functional changes here since
the git era began.

In fact it sounds like the FW loading was broken from 2008
'til 2012 - see commit fcdc90b025e6 ("atm: forever loop loading
ambassador firmware").

Let's remove this driver; there isn't much changing in the APIs, and
if users come forward we can apologize and revert.

Link: https://lore.kernel.org/all/20220321144013.440d7fc0@kicinski-fedora-pc1c0hjn.dhcp.thefacebook.com/
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agoMerge branch 'bnxt-xdp-multi-buffer'
David S. Miller [Fri, 8 Apr 2022 10:52:48 +0000 (11:52 +0100)]
Merge branch 'bnxt-xdp-multi-buffer'

Michael Chan says:

====================
bnxt: Support XDP multi buffer

This series adds XDP multi buffer support, allowing MTU to go beyond
the page size limit.

v4: Rebase with latest net-next
v3: Simplify page mode buffer size calculation
    Check to make sure XDP program supports multipage packets
v2: Fix uninitialized variable warnings in patch 1 and 10.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agobnxt: XDP multibuffer enablement
Andy Gospodarek [Fri, 8 Apr 2022 07:59:06 +0000 (03:59 -0400)]
bnxt: XDP multibuffer enablement

Allow aggregation buffers to be in place in the receive path and
allow XDP programs to be attached when using a larger than 4k MTU.

v3: Add a check to make sure the XDP program supports multipage packets.
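
An illustrative form of that check at program attach time (the MTU limit
macro and warning text are approximations, not quoted from the driver):

 if (prog && bp->dev->mtu > BNXT_MAX_PAGE_MODE_MTU &&
     !prog->aux->xdp_has_frags) {
         netdev_warn(bp->dev,
                     "MTU %d larger than %d without XDP frag support\n",
                     bp->dev->mtu, BNXT_MAX_PAGE_MODE_MTU);
         return -EOPNOTSUPP;
 }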

Signed-off-by: Andy Gospodarek <gospo@broadcom.com>
Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agobnxt: support transmit and free of aggregation buffers
Andy Gospodarek [Fri, 8 Apr 2022 07:59:05 +0000 (03:59 -0400)]
bnxt: support transmit and free of aggregation buffers

This patch adds the following features:
- Support for XDP_TX and XDP_DROP action when using xdp_buff
  with frags
- Support for freeing all frags attached to an xdp_buff
- Cleanup of TX ring buffers after transmits complete
- Slight change in definition of bnxt_sw_tx_bd since nr_frags
  and RX producer may both need to be used
- Clear out skb_shared_info at the end of the buffer

v2: Fix uninitialized variable warning in bnxt_xdp_buff_frags_free().

Signed-off-by: Andy Gospodarek <gospo@broadcom.com>
Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agobnxt: adding bnxt_xdp_build_skb to build skb from multibuffer xdp_buff
Andy Gospodarek [Fri, 8 Apr 2022 07:59:04 +0000 (03:59 -0400)]
bnxt: adding bnxt_xdp_build_skb to build skb from multibuffer xdp_buff

Since we have an xdp_buff with frags there needs to be a way to
convert that into a valid sk_buff in the event that XDP_PASS is
the resulting operation.  This adds a new rx_skb_func when the
netdev has an MTU that prevents the packets from sitting in a
single page.

This also makes sure that GRO/LRO stay disabled even when using
the aggregation ring for large buffers.

v3: Use BNXT_PAGE_MODE_BUF_SIZE for build_skb

Signed-off-by: Andy Gospodarek <gospo@broadcom.com>
Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agobnxt: add page_pool support for aggregation ring when using xdp
Andy Gospodarek [Fri, 8 Apr 2022 07:59:03 +0000 (03:59 -0400)]
bnxt: add page_pool support for aggregation ring when using xdp

If we are using aggregation rings with XDP enabled, allocate page
buffers for the aggregation rings from the page_pool.
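
Roughly, the allocation choice looks like this (condition reduced to a
single flag for illustration; the driver decides per ring):

 /* agg-ring pages come from the page_pool only when XDP is active */
 if (xdp_active)
         page = page_pool_dev_alloc_pages(rxr->page_pool);
 else
         page = alloc_page(gfp);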

Signed-off-by: Andy Gospodarek <gospo@broadcom.com>
Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agobnxt: change receive ring space parameters
Andy Gospodarek [Fri, 8 Apr 2022 07:59:02 +0000 (03:59 -0400)]
bnxt: change receive ring space parameters

Modify ring header data split and jumbo parameters to account
for the fact that the design for XDP multibuffer puts close to
the first 4k of data in a page and the remaining portions of
the packet go in the aggregation ring.

v3: Simplified code around initial buffer size calculation

Signed-off-by: Andy Gospodarek <gospo@broadcom.com>
Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agobnxt: set xdp_buff pfmemalloc flag if needed
Andy Gospodarek [Fri, 8 Apr 2022 07:59:01 +0000 (03:59 -0400)]
bnxt: set xdp_buff pfmemalloc flag if needed

Set the pfmemalloc flag in the xdp_buff so that it can be
copied to the skb if needed for an XDP_PASS action.
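
A minimal sketch of the propagation, assuming the page backing the frag
is at hand (xdp is a struct xdp_buff pointer; the helpers are the
generic kernel ones, not bnxt-specific):

 /* mark the xdp_buff so a later XDP_PASS skb inherits pfmemalloc */
 if (page_is_pfmemalloc(page))
         xdp_buff_set_frag_pfmemalloc(xdp);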

Signed-off-by: Andy Gospodarek <gospo@broadcom.com>
Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agobnxt: adding bnxt_rx_agg_pages_xdp for aggregated xdp
Andy Gospodarek [Fri, 8 Apr 2022 07:59:00 +0000 (03:59 -0400)]
bnxt: adding bnxt_rx_agg_pages_xdp for aggregated xdp

This patch adds a new function that reads pages from the
aggregation ring and creates an xdp_buff with frags based on
its entries.

Signed-off-by: Andy Gospodarek <gospo@broadcom.com>
Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agobnxt: rename bnxt_rx_pages to bnxt_rx_agg_pages_skb
Andy Gospodarek [Fri, 8 Apr 2022 07:58:59 +0000 (03:58 -0400)]
bnxt: rename bnxt_rx_pages to bnxt_rx_agg_pages_skb

Clarify that this is reading buffers from the aggregation ring.

Signed-off-by: Andy Gospodarek <gospo@broadcom.com>
Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agobnxt: refactor bnxt_rx_pages operate on skb_shared_info
Andy Gospodarek [Fri, 8 Apr 2022 07:58:58 +0000 (03:58 -0400)]
bnxt: refactor bnxt_rx_pages operate on skb_shared_info

Rather than operating on an sk_buff, add frags from the aggregation
ring into the frags of an skb_shared_info.  This will allow the
caller to use either an sk_buff or xdp_buff.
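
A sketch of the idea (how the shared info is obtained is illustrative;
the frag helpers are the generic skbuff ones):

 /* the same fill loop works for both consumers */
 struct skb_shared_info *shinfo = skb ? skb_shinfo(skb)
                                      : xdp_get_shared_info_from_buff(xdp);
 skb_frag_t *frag = &shinfo->frags[shinfo->nr_frags++];

 __skb_frag_set_page(frag, page);
 skb_frag_off_set(frag, offset);
 skb_frag_size_set(frag, frag_len);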

Signed-off-by: Andy Gospodarek <gospo@broadcom.com>
Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agobnxt: add flag to denote that an xdp program is currently attached
Andy Gospodarek [Fri, 8 Apr 2022 07:58:57 +0000 (03:58 -0400)]
bnxt: add flag to denote that an xdp program is currently attached

This will be used to determine if bnxt_rx_xdp should be called
rather than calling it every time.

Signed-off-by: Andy Gospodarek <gospo@broadcom.com>
Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agobnxt: refactor bnxt_rx_xdp to separate xdp_init_buff/xdp_prepare_buff
Andy Gospodarek [Fri, 8 Apr 2022 07:58:56 +0000 (03:58 -0400)]
bnxt: refactor bnxt_rx_xdp to separate xdp_init_buff/xdp_prepare_buff

Move initialization of xdp_buff outside of bnxt_rx_xdp to prepare
for allowing bnxt_rx_xdp to operate on multibuffer xdp_buffs.

v2: Fix uninitialized variable warnings in bnxt_xdp.c.
v3: Add new define BNXT_PAGE_MODE_BUF_SIZE

Signed-off-by: Andy Gospodarek <gospo@broadcom.com>
Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agoMerge branch 'tls-rx-refactor-part-1'
David S. Miller [Fri, 8 Apr 2022 10:49:09 +0000 (11:49 +0100)]
Merge branch 'tls-rx-refactor-part-1'

Jakub Kicinski says:

====================
tls: rx: random refactoring part 1

TLS Rx refactoring. Part 1 of 3. A couple of features to follow.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agotls: hw: rx: use return value of tls_device_decrypted() to carry status
Jakub Kicinski [Fri, 8 Apr 2022 03:38:23 +0000 (20:38 -0700)]
tls: hw: rx: use return value of tls_device_decrypted() to carry status

Instead of tls_device poking into internals of the message
return 1 from tls_device_decrypted() if the device handled
the decryption.

Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agotls: rx: refactor decrypt_skb_update()
Jakub Kicinski [Fri, 8 Apr 2022 03:38:22 +0000 (20:38 -0700)]
tls: rx: refactor decrypt_skb_update()

Use early return and a jump label to remove two indentation levels.
No functional changes.

Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agotls: rx: don't issue wake ups when data is decrypted
Jakub Kicinski [Fri, 8 Apr 2022 03:38:21 +0000 (20:38 -0700)]
tls: rx: don't issue wake ups when data is decrypted

We inform the applications that data is available when
the record is received. Decryption happens inline inside
recvmsg or splice call. Generating another wakeup inside
the decryption handler seems pointless as someone must
be actively reading the socket if we are executing this
code.

Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agotls: rx: replace 'back' with 'offset'
Jakub Kicinski [Fri, 8 Apr 2022 03:38:20 +0000 (20:38 -0700)]
tls: rx: replace 'back' with 'offset'

The TLS 1.3 padding length logic searches for the content_type from
the end of the text. IMHO the code is easier to parse if we calculate
an offset and decrement it rather than trying to maintain a positive
offset from the end of the record called "back".
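
The scan itself is unchanged either way; with an offset it reads roughly
like this (variable names illustrative):

 /* TLS 1.3: padding is zero bytes after the real content type */
 offset = len - 1;
 while (offset > 0 && content[offset] == 0)
         offset--;
 content_type = content[offset];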

Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agotls: rx: use a define for tag length
Jakub Kicinski [Fri, 8 Apr 2022 03:38:19 +0000 (20:38 -0700)]
tls: rx: use a define for tag length

TLS 1.3 has to strip padding, and it starts out 16 bytes
from the end of the record. Make it clear this is because
of the auth tag.

Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agotls: rx: init decrypted status in tls_read_size()
Jakub Kicinski [Fri, 8 Apr 2022 03:38:18 +0000 (20:38 -0700)]
tls: rx: init decrypted status in tls_read_size()

We set the record type in tls_read_size(), so we can init
the tlm->decrypted field there as well.

Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agotls: rx: don't store the decryption status in socket context
Jakub Kicinski [Fri, 8 Apr 2022 03:38:17 +0000 (20:38 -0700)]
tls: rx: don't store the decryption status in socket context

Similar justification to previous change, the information
about decryption status belongs in the skb.

Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agotls: rx: don't store the record type in socket context
Jakub Kicinski [Fri, 8 Apr 2022 03:38:16 +0000 (20:38 -0700)]
tls: rx: don't store the record type in socket context

Original TLS implementation was handling one record at a time.
It stashed the type of the record inside tls context (per socket
structure) for convenience. When async crypto support was added
[1] the author had to use skb->cb to store the type per-message.

The use of skb->cb overlaps with strparser, however, so a hybrid
approach was taken where the type is stored in the context while parsing
(since we parse a message at a time) but once parsed it's copied
to skb->cb.

Recently a workaround for sockmaps [2] exposed the previously
private struct _strp_msg and started a trend of adding user
fields directly in strparser's header. This is cleaner than
storing information about an skb in the context.

This change is not strictly necessary, but IMHO the ownership
of the context field is confusing. Information naturally
belongs to the skb.

[1] commit 94524d8fc965 ("net/tls: Add support for async decryption of tls records")
[2] commit b2c4618162ec ("bpf, sockmap: sk_skb data_end access incorrect when src_reg = dst_reg")

Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agotls: rx: drop pointless else after goto
Jakub Kicinski [Fri, 8 Apr 2022 03:38:15 +0000 (20:38 -0700)]
tls: rx: drop pointless else after goto

A pointless else branch after a goto makes the code harder to refactor
down the line.

Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agotls: rx: jump to a more appropriate label
Jakub Kicinski [Fri, 8 Apr 2022 03:38:14 +0000 (20:38 -0700)]
tls: rx: jump to a more appropriate label

'recv_end:' checks num_async and decrypted, and is then followed
by the 'end' label. Since we know that decrypted and num_async
are 0 at the start we can jump to 'end'.

Move the init of decrypted and num_async to let the compiler
catch if I'm wrong.

Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2 years agoMerge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net
Jakub Kicinski [Fri, 8 Apr 2022 06:24:23 +0000 (23:24 -0700)]
Merge git://git./linux/kernel/git/netdev/net

No conflicts.

Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2 years agoMerge tag 'net-5.18-rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net
Linus Torvalds [Fri, 8 Apr 2022 05:01:47 +0000 (19:01 -1000)]
Merge tag 'net-5.18-rc2' of git://git./linux/kernel/git/netdev/net

Pull networking fixes from Paolo Abeni:
 "Including fixes from bpf and netfilter.

  Current release - new code bugs:

   - mctp: correct mctp_i2c_header_create result

   - eth: fungible: fix reference to __udivdi3 on 32b builds

   - eth: micrel: remove latencies support lan8814

  Previous releases - regressions:

   - bpf: resolve to prog->aux->dst_prog->type only for BPF_PROG_TYPE_EXT

   - vrf: fix packet sniffing for traffic originating from ip tunnels

   - rxrpc: fix a race in rxrpc_exit_net()

   - dsa: revert "net: dsa: stop updating master MTU from master.c"

   - eth: ice: fix MAC address setting

  Previous releases - always broken:

   - tls: fix slab-out-of-bounds bug in decrypt_internal

   - bpf: support dual-stack sockets in bpf_tcp_check_syncookie

   - xdp: fix coalescing for page_pool fragment recycling

   - ovs: fix leak of nested actions

   - eth: sfc:
      - add missing xdp queue reinitialization
      - fix using uninitialized xdp tx_queue

   - eth: ice:
      - clear default forwarding VSI during VSI release
      - fix broken IFF_ALLMULTI handling
      - synchronize_rcu() when terminating rings

   - eth: qede: confirm skb is allocated before using

   - eth: aqc111: fix out-of-bounds accesses in RX fixup

   - eth: slip: fix NPD bug in sl_tx_timeout()"

* tag 'net-5.18-rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net: (61 commits)
  drivers: net: slip: fix NPD bug in sl_tx_timeout()
  bpf: Adjust bpf_tcp_check_syncookie selftest to test dual-stack sockets
  bpf: Support dual-stack sockets in bpf_tcp_check_syncookie
  myri10ge: fix an incorrect free for skb in myri10ge_sw_tso
  net: usb: aqc111: Fix out-of-bounds accesses in RX fixup
  qede: confirm skb is allocated before using
  net: ipv6mr: fix unused variable warning with CONFIG_IPV6_PIMSM_V2=n
  net: phy: mscc-miim: reject clause 45 register accesses
  net: axiemac: use a phandle to reference pcs_phy
  dt-bindings: net: add pcs-handle attribute
  net: axienet: factor out phy_node in struct axienet_local
  net: axienet: setup mdio unconditionally
  net: sfc: fix using uninitialized xdp tx_queue
  rxrpc: fix a race in rxrpc_exit_net()
  net: openvswitch: fix leak of nested actions
  net: ethernet: mv643xx: Fix over zealous checking of_get_mac_address()
  net: openvswitch: don't send internal clone attribute to the userspace.
  net: micrel: Fix KS8851 Kconfig
  ice: clear cmd_type_offset_bsz for TX rings
  ice: xsk: fix VSI state check in ice_xsk_wakeup()
  ...

2 years agonet: mpls: fix memdup.cocci warning
GONG, Ruiqi [Wed, 6 Apr 2022 11:46:29 +0000 (19:46 +0800)]
net: mpls: fix memdup.cocci warning

Simply use kmemdup instead of explicitly allocating and copying memory.

Generated by: scripts/coccinelle/api/memdup.cocci
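
The general shape of the transformation (an illustration of the pattern,
not the mpls hunk itself):

 -       new = kmalloc(len, GFP_KERNEL);
 -       if (!new)
 -               return -ENOMEM;
 -       memcpy(new, old, len);
 +       new = kmemdup(old, len, GFP_KERNEL);
 +       if (!new)
 +               return -ENOMEM;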

Signed-off-by: GONG, Ruiqi <gongruiqi1@huawei.com>
Link: https://lore.kernel.org/r/20220406114629.182833-1-gongruiqi1@huawei.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2 years agohv_netvsc: Print value of invalid ID in netvsc_send_{completion,tx_complete}()
Andrea Parri (Microsoft) [Thu, 7 Apr 2022 04:40:34 +0000 (06:40 +0200)]
hv_netvsc: Print value of invalid ID in netvsc_send_{completion,tx_complete}()

This is useful for debugging purposes.

Notice that the packet descriptor is in "private" guest memory, so
that Hyper-V can not tamper with it.

While at it, remove two unnecessary u64-casts.
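
Illustratively, the warning now carries the offending ID (the field and
message text here are assumptions, not the driver's exact output):

 netdev_err(ndev, "Invalid transaction ID %llu\n", desc->trans_id);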

Signed-off-by: Andrea Parri (Microsoft) <parri.andrea@gmail.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>