platform/kernel/linux-rpi.git
Andrii Nakryiko [Wed, 16 Oct 2019 06:00:45 +0000 (23:00 -0700)]
selftests/bpf: Teach test_progs to cd into subdir

We are building a bunch of "flavors" of test_progs, e.g., with the alu32
flag for Clang when building the BPF objects. The test_progs setup relies
on all the BPF object files and extra resources being available in the
current working directory, though. But we actually build all these files
into a separate sub-directory. The next set of patches establishes a
convention of naming "flavored" test_progs (and test runner binaries in
general) as test_progs-flavor (e.g., test_progs-alu32), for each such
extra flavor. This patch teaches the test_progs binary to automatically
detect its own extra flavor based on its argv[0] and, if present, to
change the current directory to a flavor-specific subdirectory.
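
A rough sketch of the idea (illustrative only, not the actual test_progs
code; names and buffer sizes are made up):

#include <libgen.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

static void maybe_enter_flavor_subdir(const char *argv0)
{
    char path[256];
    const char *exe, *flavor;

    strncpy(path, argv0, sizeof(path) - 1);
    path[sizeof(path) - 1] = '\0';
    exe = basename(path);            /* e.g. "test_progs-alu32" */
    flavor = strrchr(exe, '-');
    if (!flavor)
        return;                      /* plain "test_progs": stay put */
    flavor++;                        /* "alu32" */
    if (chdir(flavor))               /* BPF objects live in ./alu32/ */
        fprintf(stderr, "failed to cd into %s\n", flavor);
}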

Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20191016060051.2024182-2-andriin@fb.com
Jakub Sitnicki [Thu, 17 Oct 2019 08:37:52 +0000 (10:37 +0200)]
selftests/bpf: Restore the netns after flow dissector reattach test

flow_dissector_reattach test changes the netns we run in but does not
restore it to the one we started in when finished. This interferes with
tests that run after it. Fix it by restoring the netns when done.
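
A minimal sketch of the save/restore pattern used here (simplified, not
the actual test code):

#define _GNU_SOURCE
#include <fcntl.h>
#include <sched.h>
#include <unistd.h>

static int saved_netns = -1;

static void save_netns(void)
{
    saved_netns = open("/proc/self/ns/net", O_RDONLY);
}

static void restore_netns(void)
{
    /* put the process back into the netns it started in, so that
     * tests running afterwards are not affected */
    if (saved_netns >= 0) {
        setns(saved_netns, CLONE_NEWNET);
        close(saved_netns);
        saved_netns = -1;
    }
}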

Fixes: f97eea1756f3 ("selftests/bpf: Check that flow dissector can be re-attached")
Reported-by: Alexei Starovoitov <ast@kernel.org>
Reported-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Jakub Sitnicki <jakub@cloudflare.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Martin KaFai Lau <kafai@fb.com>
Link: https://lore.kernel.org/bpf/20191017083752.30999-1-jakub@cloudflare.com
Daniel Borkmann [Thu, 17 Oct 2019 14:44:36 +0000 (16:44 +0200)]
Merge branch 'bpf-btf-trace'

Alexei Starovoitov says:

====================
v2->v3:
- while trying to adopt btf-based tracing in a production service, we
  realized that disabling bpf_probe_read() was premature. A real tracing
  program needs to see much more than this type-safe tracking can provide.
  With these patches the verifier will be able to see that skb->data
  is a pointer to 'u8 *', but it cannot possibly know how many bytes
  of it are readable. Hence bpf_probe_read() is necessary to do basic
  packet reading from a tracing program. Some helper could be introduced
  to solve this particular problem, but there are other similar structures.
  Another issue is bitfield reading. The support for bitfields
  is coming to llvm. libbpf will be supporting it eventually as well,
  but there will be corner cases where bpf_probe_read() is necessary.
  The long term goal is still the same: get rid of probe_read eventually.
- fixed build issue with clang reported by Nathan Chancellor.
- addressed a ton of comments from Andrii.
  bitfields and arrays are explicitly unsupported in btf-based tracking.
  This will be improved in the future.
  Right now the verifier is more strict than necessary.
  In some cases it could fall back to 'scalar' instead of rejecting
  the program, but rejecting today allows better decisions to be made
  in the future.
- adjusted testcase to demo bitfield and skb->data reading.

v1->v2:
- addressed feedback from Andrii and Eric. Thanks a lot for review!
- added missing check at raw_tp attach time.
- Andrii noticed that expected_attach_type cannot be reused.
  Had to introduce new field to bpf_attr.
- cleaned up logging nicely by introducing bpf_log() helper.
- rebased.

Revolutionize bpf tracing and bpf C programming.
The C language allows any pointer to be typecast to any other pointer,
or an integer to be converted to a pointer.
Though the bpf verifier operates at the assembly level, it has strict
type checking for a fixed number of types.
Known types are defined in 'enum bpf_reg_type'.
For example:
PTR_TO_FLOW_KEYS is a pointer to 'struct bpf_flow_keys'
PTR_TO_SOCKET is a pointer to 'struct bpf_sock',
and so on.

When it comes to bpf tracing there are no types to track.
bpf+kprobe receives 'struct pt_regs' as input.
bpf+raw_tracepoint receives raw kernel arguments as an array of u64 values.
It was up to bpf program to interpret these integers.
Typical tracing program looks like:
int bpf_prog(struct pt_regs *ctx)
{
    struct net_device *dev;
    struct sk_buff *skb;
    int ifindex;

    skb = (struct sk_buff *) ctx->di;
    bpf_probe_read(&dev, sizeof(dev), &skb->dev);
    bpf_probe_read(&ifindex, sizeof(ifindex), &dev->ifindex);
    return 0;
}
Addressing mistakes will not be caught by the C compiler or by the verifier.
The program above could have typecast ctx->si to skb and page faulted
on every bpf_probe_read().
bpf_probe_read() allows reading any address and suppresses page faults.
A typical program has hundreds of bpf_probe_read() calls to walk
kernel data structures.
Not only would such a tracing program be slow, but there was always a risk
that bpf_probe_read() would read an mmio region of memory and cause
unpredictable hw behavior.

With the introduction of Compile Once Run Everywhere technology in libbpf
and in LLVM, and of the BPF Type Format (BTF), the verifier is finally ready
for the next step in program verification.
Now it can use in-kernel BTF to type-check bpf assembly code.

Equivalent program will look like:
struct trace_kfree_skb {
    struct sk_buff *skb;
    void *location;
};
SEC("raw_tracepoint/kfree_skb")
int trace_kfree_skb(struct trace_kfree_skb* ctx)
{
    struct sk_buff *skb = ctx->skb;
    struct net_device *dev;
    int ifindex;

    __builtin_preserve_access_index(({
        dev = skb->dev;
        ifindex = dev->ifindex;
    }));
    return 0;
}

These patches teach bpf verifier to recognize kfree_skb's first argument
as 'struct sk_buff *' because this is what kernel C code is doing.
The bpf program cannot 'cheat' and say that the first argument
to kfree_skb raw_tracepoint is some other type.
The verifier will catch such type mismatch between bpf program
assumption of kernel code and the actual type in the kernel.

Furthermore skb->dev access is type tracked as well.
The verifier can see which field of skb is being read
in bpf assembly. It will match offset to type.
If bpf program has code:
struct net_device *dev = (void *)skb->len;
C compiler will not complain and generate bpf assembly code,
but the verifier will recognize that integer 'len' field
is being accessed at offsetof(struct sk_buff, len) and will reject
further dereference of 'dev' variable because it contains
integer value instead of a pointer.

Such sophisticated type tracking allows calling networking
bpf helpers from tracing programs.
This patchset allows calling bpf_skb_event_output() that dumps
skb data into perf ring buffer.
It greatly improves observability.
Now users can not only see the packet length of the skb
about to be freed in the kfree_skb() kernel function, but can
dump it to user space via the perf ring buffer using a bpf helper
that was previously available only to TC and socket filters.
See patch 10 for full example.

The end result is safer and faster bpf tracing.
Safer - because type safe direct load can be used most of the time
instead of bpf_probe_read().
Faster - because direct loads are used to walk kernel data structures
instead of bpf_probe_read() calls.
Note that such loads can page fault and are supported by
hidden bpf_probe_read() in interpreter and via exception table
if program is JITed.
====================

Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Alexei Starovoitov [Wed, 16 Oct 2019 03:25:05 +0000 (20:25 -0700)]
selftests/bpf: Add kfree_skb raw_tp test

Load a basic cls_bpf program.
Load a raw_tracepoint program and attach it to the kfree_skb raw tracepoint.
Trigger the cls_bpf program via prog_test_run.
At the end of test_run the kernel will call kfree_skb, which will trigger
the trace_kfree_skb tracepoint, which will call our raw_tracepoint program,
which will take that skb and dump it into the perf ring buffer.
Check that user space received the correct packet.
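
A condensed sketch of that flow from the user-space side (error handling
and program loading omitted; the prog fds, packet buffers and the perf
ring buffer polling are assumed to be set up elsewhere):

#include <unistd.h>
#include <bpf/bpf.h>    /* bpf_raw_tracepoint_open, bpf_prog_test_run */

static int run_kfree_skb_test(int cls_prog_fd, int tp_prog_fd,
                              void *pkt_in, __u32 pkt_len,
                              void *pkt_out, __u32 out_len)
{
    __u32 retval, duration, size_out = out_len;
    int tp_fd;

    /* attach the tracing program to the kfree_skb raw tracepoint */
    tp_fd = bpf_raw_tracepoint_open("kfree_skb", tp_prog_fd);
    if (tp_fd < 0)
        return -1;

    /* run the cls_bpf program; when test_run finishes, the kernel calls
     * kfree_skb(), which fires trace_kfree_skb and our tracing program */
    bpf_prog_test_run(cls_prog_fd, 1, pkt_in, pkt_len,
                      pkt_out, &size_out, &retval, &duration);

    /* the tracing program dumped the skb into a perf ring buffer; the
     * real test then polls it and compares against the packet sent */
    close(tp_fd);
    return 0;
}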

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Andrii Nakryiko <andriin@fb.com>
Acked-by: Martin KaFai Lau <kafai@fb.com>
Link: https://lore.kernel.org/bpf/20191016032505.2089704-12-ast@kernel.org
Alexei Starovoitov [Wed, 16 Oct 2019 03:25:04 +0000 (20:25 -0700)]
bpf: Check types of arguments passed into helpers

Introduce new helper that reuses existing skb perf_event output
implementation, but can be called from raw_tracepoint programs
that receive 'struct sk_buff *' as tracepoint argument or
can walk other kernel data structures to skb pointer.

In order to do that teach verifier to resolve true C types
of bpf helpers into in-kernel BTF ids.
The type of kernel pointer passed by raw tracepoint into bpf
program will be tracked by the verifier all the way until
it's passed into helper function.
For example:
the kfree_skb() kernel function calls trace_kfree_skb(skb, loc);
the bpf program receives that skb pointer and may eventually
pass it into the bpf_skb_output() bpf helper, which in-kernel is
implemented via the bpf_skb_event_output() kernel function.
Its first argument in the kernel is 'struct sk_buff *'.
The verifier makes sure that types match all the way.
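
A BPF-side sketch of that flow (assuming bpf_skb_output() keeps the same
argument layout as bpf_perf_event_output(); the map and the
trace_kfree_skb layout follow the cover letter and are illustrative):

#include <linux/bpf.h>
#include "bpf_helpers.h"

struct sk_buff;

struct trace_kfree_skb {
    struct sk_buff *skb;
    void *location;
};

struct bpf_map_def SEC("maps") perf_map = {
    .type = BPF_MAP_TYPE_PERF_EVENT_ARRAY,
    .key_size = sizeof(int),
    .value_size = sizeof(int),
};

SEC("raw_tracepoint/kfree_skb")
int send_skb(struct trace_kfree_skb *ctx)
{
    struct sk_buff *skb = ctx->skb;    /* tracked as PTR_TO_BTF_ID */
    __u64 cookie = 0xdead;

    /* the verifier checks that 'skb' really is a 'struct sk_buff *'
     * before letting it reach the helper's first argument */
    bpf_skb_output(skb, &perf_map, BPF_F_CURRENT_CPU,
                   &cookie, sizeof(cookie));
    return 0;
}

char _license[] SEC("license") = "GPL";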

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Andrii Nakryiko <andriin@fb.com>
Acked-by: Martin KaFai Lau <kafai@fb.com>
Link: https://lore.kernel.org/bpf/20191016032505.2089704-11-ast@kernel.org
Alexei Starovoitov [Wed, 16 Oct 2019 03:25:03 +0000 (20:25 -0700)]
bpf: Add support for BTF pointers to x86 JIT

Pointer to BTF object is a pointer to kernel object or NULL.
Such pointers can only be used by BPF_LDX instructions.
The verifier changed their opcode from LDX|MEM|size
to LDX|PROBE_MEM|size to make JITing easier.
The number of entries in extable is the number of BPF_LDX insns
that access kernel memory via "pointer to BTF type".
Only these load instructions can fault.
Since x86 extable is relative it has to be allocated in the same
memory region as JITed code.
Allocate it prior to last pass of JITing and let the last pass populate it.
Pointer to extable in bpf_prog_aux is necessary to make page fault
handling fast.
Page fault handling is done in two steps:
1. bpf_prog_kallsyms_find() finds BPF program that page faulted.
   It's done by walking rb tree.
2. then extable for given bpf program is binary searched.
This process is similar to how page faulting is done for kernel modules.
The exception handler skips over faulting x86 instruction and
initializes destination register with zero. This mimics exact
behavior of bpf_probe_read (when probe_kernel_read faults dest is zeroed).
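
A rough sketch of that two-step lookup (not the actual arch code;
bpf_prog_kallsyms_find() and search_extable() are the existing kernel
facilities it builds on, and the field names follow the description
above):

#include <linux/bpf.h>
#include <linux/extable.h>

static bool bpf_fixup_probe_mem_fault(unsigned long fault_ip)
{
    const struct exception_table_entry *ex;
    struct bpf_prog *prog;

    prog = bpf_prog_kallsyms_find(fault_ip);        /* 1. rb-tree walk */
    if (!prog || !prog->aux->num_exentries)
        return false;                               /* not a BPF_LDX fault */

    ex = search_extable(prog->aux->extable,         /* 2. binary search */
                        prog->aux->num_exentries, fault_ip);
    if (!ex)
        return false;

    /* the handler then skips the faulting x86 instruction and zeroes the
     * destination register, mimicking bpf_probe_read() semantics */
    return true;
}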

JITs for other architectures can add support in a similar way.
Until then they will reject the unknown opcode and fall back to the interpreter.

Since extable should be aligned and placed near JITed code
make bpf_jit_binary_alloc() return 4 byte aligned image offset,
so that extable aligning formula in bpf_int_jit_compile() doesn't need
to rely on internal implementation of bpf_jit_binary_alloc().
On x86 gcc defaults to 16-byte alignment for regular kernel functions
due to better performance. JITed code may be aligned to 16 in the future,
but it will use 4 in the meantime.

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Andrii Nakryiko <andriin@fb.com>
Acked-by: Martin KaFai Lau <kafai@fb.com>
Link: https://lore.kernel.org/bpf/20191016032505.2089704-10-ast@kernel.org
Alexei Starovoitov [Wed, 16 Oct 2019 03:25:02 +0000 (20:25 -0700)]
bpf: Add support for BTF pointers to interpreter

Pointer to BTF object is a pointer to kernel object or NULL.
The memory access in the interpreter has to be done via probe_kernel_read
to avoid page faults.
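
A simplified sketch of what this means for an 8-byte PROBE_MEM load (the
real code is part of the BPF_LDX handling in the interpreter and covers
all access sizes):

#include <linux/types.h>
#include <linux/uaccess.h>    /* probe_kernel_read() */

static u64 probe_mem_load64(const void *kernel_ptr)
{
    u64 dst = 0;

    /* probe_kernel_read() suppresses the page fault; on failure the
     * destination stays 0, matching bpf_probe_read() behaviour */
    probe_kernel_read(&dst, kernel_ptr, sizeof(dst));
    return dst;
}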

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Andrii Nakryiko <andriin@fb.com>
Acked-by: Martin KaFai Lau <kafai@fb.com>
Link: https://lore.kernel.org/bpf/20191016032505.2089704-9-ast@kernel.org
Alexei Starovoitov [Wed, 16 Oct 2019 03:25:01 +0000 (20:25 -0700)]
bpf: Attach raw_tp program with BTF via type name

BTF type id specified at program load time has all
necessary information to attach that program to raw tracepoint.
Use kernel type name to find raw tracepoint.
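
A condensed sketch of that name-based lookup (illustrative;
btf_name_by_offset() is the existing in-kernel BTF string helper):

#include <linux/btf.h>
#include <linux/string.h>

static const char *tp_name_from_btf(const struct btf *btf,
                                    const struct btf_type *t)
{
    const char *tname = btf_name_by_offset(btf, t->name_off);

    /* BTF-enabled raw tracepoints are exposed as "btf_trace_<name>" */
    if (!tname || strncmp(tname, "btf_trace_", sizeof("btf_trace_") - 1))
        return NULL;
    return tname + sizeof("btf_trace_") - 1;    /* e.g. "kfree_skb" */
}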

Add missing CHECK_ATTR() condition.

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Andrii Nakryiko <andriin@fb.com>
Acked-by: Martin KaFai Lau <kafai@fb.com>
Link: https://lore.kernel.org/bpf/20191016032505.2089704-8-ast@kernel.org
Alexei Starovoitov [Wed, 16 Oct 2019 03:25:00 +0000 (20:25 -0700)]
bpf: Implement accurate raw_tp context access via BTF

libbpf analyzes bpf C program, searches in-kernel BTF for given type name
and stores it into expected_attach_type.
The kernel verifier expects this btf_id to point to something like:
typedef void (*btf_trace_kfree_skb)(void *, struct sk_buff *skb, void *loc);
which represents signature of raw_tracepoint "kfree_skb".

Then btf_ctx_access() matches a ctx+0 access in the bpf program with the 'skb'
argument and a ctx+8 access with the 'loc' argument of the "kfree_skb" tracepoint.
In the first case it passes the btf_id of 'struct sk_buff *' back to the verifier
core, and 'void *' in the second case.
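
Conceptually, the ctx offset maps to a tracepoint argument index like
this (a sketch, since every raw_tracepoint argument occupies 8 bytes of
ctx):

static int ctx_off_to_arg(int off)
{
    return off / 8;    /* ctx+0 -> 'skb', ctx+8 -> 'loc' */
}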

Then the verifier tracks PTR_TO_BTF_ID as any other pointer type.
Like PTR_TO_SOCKET points to 'struct bpf_sock',
PTR_TO_TCP_SOCK points to 'struct bpf_tcp_sock', and so on.
PTR_TO_BTF_ID points to in-kernel structs.
If 1234 is btf_id of 'struct sk_buff' in vmlinux's BTF
then PTR_TO_BTF_ID#1234 points to one of in kernel skbs.

When PTR_TO_BTF_ID#1234 is dereferenced (like r2 = *(u64 *)(r1 + 32))
btf_struct_access() checks which field of 'struct sk_buff' is
at offset 32. It checks that the size of the access matches the type
definition of the field and continues to track the dereferenced type.
If that field was a pointer to 'struct net_device', then r2's type
will be PTR_TO_BTF_ID#456, where 456 is the btf_id of 'struct net_device'
in vmlinux's BTF.

Such verifier analysis prevents "cheating" in BPF C program.
The program cannot cast arbitrary pointer to 'struct sk_buff *'
and access it. C compiler would allow type cast, of course,
but the verifier will notice type mismatch based on BPF assembly
and in-kernel BTF.

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Andrii Nakryiko <andriin@fb.com>
Acked-by: Martin KaFai Lau <kafai@fb.com>
Link: https://lore.kernel.org/bpf/20191016032505.2089704-7-ast@kernel.org
Alexei Starovoitov [Wed, 16 Oct 2019 03:24:59 +0000 (20:24 -0700)]
libbpf: Auto-detect btf_id of BTF-based raw_tracepoints

It's the responsibility of the bpf program author to annotate the program
with SEC("tp_btf/name") where "name" is a valid raw tracepoint.
libbpf will try to find "name" in vmlinux BTF and error out
in case vmlinux BTF is not available or "name" is not found.
If "name" is indeed a valid raw tracepoint then the in-kernel BTF
will have a "btf_trace_##name" typedef that points to the function
prototype of that raw tracepoint. The BTF description captures the
exact arguments the kernel C code is passing into the raw tracepoint.
The kernel verifier will check the types while loading the bpf program.
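
A hedged usage sketch (the section prefix and the ctx layout follow this
commit and the series cover letter; the forward-declared sk_buff keeps
the snippet self-contained):

#include <linux/bpf.h>
#include "bpf_helpers.h"

struct sk_buff;

/* mirrors the kfree_skb tracepoint arguments, as in the cover letter */
struct trace_kfree_skb {
    struct sk_buff *skb;
    void *location;
};

SEC("tp_btf/kfree_skb")
int handle_kfree_skb(struct trace_kfree_skb *ctx)
{
    struct sk_buff *skb = ctx->skb;    /* typed access, checked against BTF */

    return skb ? 0 : 1;
}

char _license[] SEC("license") = "GPL";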

libbpf keeps BTF type id in expected_attach_type, but since
kernel ignores this attribute for tracing programs copy it
into attach_btf_id attribute before loading.

Later the kernel will use prog->attach_btf_id to select raw tracepoint
during bpf_raw_tracepoint_open syscall command.

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Andrii Nakryiko <andriin@fb.com>
Acked-by: Martin KaFai Lau <kafai@fb.com>
Link: https://lore.kernel.org/bpf/20191016032505.2089704-6-ast@kernel.org
Alexei Starovoitov [Wed, 16 Oct 2019 03:24:58 +0000 (20:24 -0700)]
bpf: Add attach_btf_id attribute to program load

Add attach_btf_id attribute to prog_load command.
It's similar to existing expected_attach_type attribute which is
used in several cgroup based program types.
Unfortunately expected_attach_type is ignored for
tracing programs and cannot be reused for new purpose.
Hence introduce attach_btf_id to verify bpf programs against
given in-kernel BTF type id at load time.
It is strictly checked to be valid for raw_tp programs only.
In later patches it will become:
btf_id == 0: semantics of existing raw_tp progs.
btf_id > 0: raw_tp with BTF and additional type safety.
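
A minimal sketch of how a loader fills the new field (the insns, the
license handling and the btf_id value come from elsewhere; names are
illustrative):

#include <linux/bpf.h>
#include <string.h>
#include <sys/syscall.h>
#include <unistd.h>

static int load_raw_tp_prog(const struct bpf_insn *insns, __u32 insn_cnt,
                            __u32 attach_btf_id)
{
    union bpf_attr attr;

    memset(&attr, 0, sizeof(attr));
    attr.prog_type = BPF_PROG_TYPE_RAW_TRACEPOINT;
    attr.insns = (__u64)(unsigned long)insns;
    attr.insn_cnt = insn_cnt;
    attr.license = (__u64)(unsigned long)"GPL";
    attr.attach_btf_id = attach_btf_id;    /* 0: legacy raw_tp, >0: BTF-typed */

    return syscall(__NR_bpf, BPF_PROG_LOAD, &attr, sizeof(attr));
}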

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Andrii Nakryiko <andriin@fb.com>
Acked-by: Martin KaFai Lau <kafai@fb.com>
Link: https://lore.kernel.org/bpf/20191016032505.2089704-5-ast@kernel.org
Alexei Starovoitov [Wed, 16 Oct 2019 03:24:57 +0000 (20:24 -0700)]
bpf: Process in-kernel BTF

If in-kernel BTF exists, parse it and prepare 'struct btf *btf_vmlinux'
for further use by the verifier.
In-kernel BTF is trusted just like kallsyms and other build artifacts
embedded into vmlinux.
Yet run this BTF image through the BTF verifier to make sure
that it is valid and wasn't mangled during the build.
If the in-kernel BTF is incorrect, it means either gcc, pahole or the
kernel is buggy. In that case, disallow loading BPF programs.

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Andrii Nakryiko <andriin@fb.com>
Acked-by: Martin KaFai Lau <kafai@fb.com>
Link: https://lore.kernel.org/bpf/20191016032505.2089704-4-ast@kernel.org
Alexei Starovoitov [Wed, 16 Oct 2019 03:24:56 +0000 (20:24 -0700)]
bpf: Add typecast to bpf helpers to help BTF generation

When pahole converts dwarf to btf it emits only used types.
Wrap the existing bpf helper functions into a typedef and use it in a
typecast to make gcc emit this type into dwarf.
Then pahole will convert it to btf.
The "btf_#name_of_helper" types will be used to figure out
types of arguments of bpf helpers.
The generated code before and after is the same.
Only dwarf and btf sections are different.
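
An illustrative sketch of the trick with a made-up helper (the real
patch applies this via the helper definition macros; the point is that a
typedef referenced only through a cast still ends up in DWARF, which
pahole then converts to BTF):

struct sk_buff;

static unsigned long bpf_my_helper(struct sk_buff *skb, void *data,
                                   unsigned long size)
{
    return 0;
}

/* "btf_" + helper name, mirroring the naming described above */
typedef unsigned long (*btf_bpf_my_helper)(struct sk_buff *skb, void *data,
                                           unsigned long size);

/* the cast forces the btf_bpf_my_helper typedef into the debug info */
static btf_bpf_my_helper my_helper_ref __attribute__((used)) =
    (btf_bpf_my_helper)bpf_my_helper;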

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Andrii Nakryiko <andriin@fb.com>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Acked-by: Martin KaFai Lau <kafai@fb.com>
Link: https://lore.kernel.org/bpf/20191016032505.2089704-3-ast@kernel.org
Alexei Starovoitov [Wed, 16 Oct 2019 03:24:55 +0000 (20:24 -0700)]
bpf: Add typecast to raw_tracepoints to help BTF generation

When pahole converts dwarf to btf it emits only used types.
Wrap the existing __bpf_trace_##template() function into a
btf_trace_##template typedef and use it in a type cast to
make gcc emit this type into dwarf. Then pahole will convert it to btf.
The "btf_trace_" prefix will be used to identify BTF enabled raw tracepoints.

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Andrii Nakryiko <andriin@fb.com>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Acked-by: Martin KaFai Lau <kafai@fb.com>
Link: https://lore.kernel.org/bpf/20191016032505.2089704-2-ast@kernel.org
Song Liu [Mon, 14 Oct 2019 17:12:23 +0000 (10:12 -0700)]
bpf/stackmap: Fix deadlock with rq_lock in bpf_get_stack()

bpf stackmap with build-id lookup (BPF_F_STACK_BUILD_ID) can trigger A-A
deadlock on rq_lock():

rcu: INFO: rcu_sched detected stalls on CPUs/tasks:
[...]
Call Trace:
 try_to_wake_up+0x1ad/0x590
 wake_up_q+0x54/0x80
 rwsem_wake+0x8a/0xb0
 bpf_get_stack+0x13c/0x150
 bpf_prog_fbdaf42eded9fe46_on_event+0x5e3/0x1000
 bpf_overflow_handler+0x60/0x100
 __perf_event_overflow+0x4f/0xf0
 perf_swevent_overflow+0x99/0xc0
 ___perf_sw_event+0xe7/0x120
 __schedule+0x47d/0x620
 schedule+0x29/0x90
 futex_wait_queue_me+0xb9/0x110
 futex_wait+0x139/0x230
 do_futex+0x2ac/0xa50
 __x64_sys_futex+0x13c/0x180
 do_syscall_64+0x42/0x100
 entry_SYSCALL_64_after_hwframe+0x44/0xa9

This can be reproduced by:
1. Start a multi-thread program that does parallel mmap() and malloc();
2. taskset the program to 2 CPUs;
3. Attach bpf program to trace_sched_switch and gather stackmap with
   build-id, e.g. with trace.py from bcc tools:
   trace.py -U -p <pid> -s <some-bin,some-lib> t:sched:sched_switch

A sample reproducer is attached at the end.

This could also trigger deadlock with other locks that are nested with
rq_lock.

Fix this by checking whether irqs are disabled. Since rq_lock and all
other nested locks are irq safe, it is safe to do up_read() when irqs are
not disabled. If irqs are disabled, postpone the up_read() to an irq_work.
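
A condensed sketch of that logic (field names are illustrative, loosely
following the description above; the real change lives in the build-id
lookup path of kernel/bpf/stackmap.c):

#include <linux/irq_work.h>
#include <linux/irqflags.h>
#include <linux/rwsem.h>

struct stack_map_irq_work {
    struct irq_work irq_work;
    struct rw_semaphore *sem;
};

static void stack_map_put_mmap_sem(struct stack_map_irq_work *work,
                                   struct rw_semaphore *sem)
{
    if (!irqs_disabled()) {
        up_read(sem);
        return;
    }

    /* up_read() may wake a waiter, and waking needs rq_lock, which the
     * interrupted context may already hold -> defer to irq_work */
    work->sem = sem;
    irq_work_queue(&work->irq_work);
}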

Fixes: 615755a77b24 ("bpf: extend stackmap to save binary_build_id+offset instead of address")
Signed-off-by: Song Liu <songliubraving@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Alexei Starovoitov <ast@kernel.org>
Cc: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20191014171223.357174-1-songliubraving@fb.com
Reproducer:
============================ 8< ============================

#include <fcntl.h>
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <time.h>
#include <unistd.h>

/* THREAD_COUNT is not defined in the original reproducer; any value
 * large enough to oversubscribe the 2 CPUs works. */
#define THREAD_COUNT 40

char *filename;

void *worker(void *p)
{
        void *ptr;
        int fd;
        char *pptr;

        fd = open(filename, O_RDONLY);
        if (fd < 0)
                return NULL;
        while (1) {
                struct timespec ts = {0, 1000 + rand() % 2000};

                ptr = mmap(NULL, 4096 * 64, PROT_READ, MAP_PRIVATE, fd, 0);
                usleep(1);
                if (ptr == MAP_FAILED) {
                        printf("failed to mmap\n");
                        break;
                }
                munmap(ptr, 4096 * 64);
                usleep(1);
                pptr = malloc(1);
                usleep(1);
                pptr[0] = 1;
                usleep(1);
                free(pptr);
                usleep(1);
                nanosleep(&ts, NULL);
        }
        close(fd);
        return NULL;
}

int main(int argc, char *argv[])
{
        void *ptr;
        int i;
        pthread_t threads[THREAD_COUNT];

        if (argc < 2)
                return 0;

        filename = argv[1];

        for (i = 0; i < THREAD_COUNT; i++) {
                if (pthread_create(threads + i, NULL, worker, NULL)) {
                        fprintf(stderr, "Error creating thread\n");
                        return 0;
                }
        }

        for (i = 0; i < THREAD_COUNT; i++)
                pthread_join(threads[i], NULL);
        return 0;
}
============================ 8< ============================

Jakub Sitnicki [Wed, 16 Oct 2019 08:58:11 +0000 (10:58 +0200)]
scripts/bpf: Emit an #error directive when the known types list needs updating

Make the compiler report a clear error when bpf_helpers_doc.py needs
updating rather than rely on the fact that Clang fails to compile
English:

../../../lib/bpf/bpf_helper_defs.h:2707:1: error: unknown type name 'Unrecognized'
Unrecognized type 'struct bpf_inet_lookup', please add it to known types!

Signed-off-by: Jakub Sitnicki <jakub@cloudflare.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20191016085811.11700-1-jakub@cloudflare.com
Jiri Pirko [Tue, 15 Oct 2019 10:00:56 +0000 (12:00 +0200)]
selftests: bpf: Don't try to read files without read permission

Recently a couple of write-only files were added to netdevsim
debugfs. Don't read these files, to avoid errors.

Reported-by: kernel test robot <rong.a.chen@intel.com>
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Stanislav Fomichev [Tue, 15 Oct 2019 18:31:25 +0000 (11:31 -0700)]
selftests: bpf: Add selftest for __sk_buff tstamp

Make sure BPF_PROG_TEST_RUN accepts tstamp and exports any
modifications that the BPF program makes.

Signed-off-by: Stanislav Fomichev <sdf@google.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Martin KaFai Lau <kafai@fb.com>
Link: https://lore.kernel.org/bpf/20191015183125.124413-2-sdf@google.com
Stanislav Fomichev [Tue, 15 Oct 2019 18:31:24 +0000 (11:31 -0700)]
bpf: Allow __sk_buff tstamp in BPF_PROG_TEST_RUN

It's useful for implementing EDT related tests (set tstamp, run the
test, see how the tstamp is changed or observe some other parameter).

Note that bpf_ktime_get_ns() helper is using monotonic clock, so for
the BPF programs that compare tstamp against it, tstamp should be
derived from clock_gettime(CLOCK_MONOTONIC, ...).
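
For example, a user-space test might derive the input tstamp like this
(a sketch; passing the __sk_buff context to BPF_PROG_TEST_RUN via ctx_in
is assumed to be set up separately):

#include <time.h>

static unsigned long long monotonic_ns(void)
{
    struct timespec ts;

    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec * 1000000000ULL + ts.tv_nsec;
}

/* e.g.: skb.tstamp = monotonic_ns(); before running the test, so the BPF
 * program can meaningfully compare it against bpf_ktime_get_ns() */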

Signed-off-by: Stanislav Fomichev <sdf@google.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Martin KaFai Lau <kafai@fb.com>
Link: https://lore.kernel.org/bpf/20191015183125.124413-1-sdf@google.com
Alexei Starovoitov [Tue, 15 Oct 2019 23:06:05 +0000 (16:06 -0700)]
Merge branch 'libbpf-field-existence'

Andrii Nakryiko says:

====================
This patch set generalizes libbpf's CO-RE relocation support. In addition to
existing field's byte offset relocation, libbpf now supports field existence
relocations, which are emitted by Clang when using
__builtin_preserve_field_info(<field>, BPF_FIELD_EXISTS). A convenience
bpf_core_field_exists() macro is added to the bpf_core_read.h BPF-side header,
along with the bpf_field_info_kind enum containing the currently supported
kinds of field information. This list will grow as libbpf gains support
for other relo kinds.
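
A BPF-side sketch of how the new macro is meant to be used (assumes the
bpf_core_read.h header from this series; the struct below is just an
illustrative local view of a field that may be missing on the running
kernel):

#include "bpf_core_read.h"

struct task_struct___maybe {
    int some_new_field;
} __attribute__((preserve_access_index));

static int read_new_field(void *task)
{
    struct task_struct___maybe *t = task;
    int val = -1;

    if (bpf_core_field_exists(t->some_new_field))
        /* the field offset in this read is CO-RE relocated by libbpf */
        bpf_core_read(&val, sizeof(val), &t->some_new_field);

    return val;    /* -1 when the kernel lacks the field */
}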

This patch set upgrades the format of .BTF.ext's relocation record to match
latest Clang's format (12 -> 16 bytes). This is not a breaking change, as the
previous format hasn't been released yet as part of official Clang version
release.

v1->v2:
- unify bpf_field_info_kind enum and naming changes (Alexei);
- added bpf_core_field_exists() to bpf_core_read.h.
====================

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Andrii Nakryiko [Tue, 15 Oct 2019 18:28:49 +0000 (11:28 -0700)]
selftests/bpf: Add field existence CO-RE relocs tests

Add a bunch of tests validating that CO-RE handles the field existence
relocation. Relaxed CO-RE relocation mode is activated for these new
tests to prevent libbpf from rejecting the BPF object for a no-match
relocation when a field is missing, even though the test BPF program is
not going to use that relocation.

Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20191015182849.3922287-6-andriin@fb.com
Andrii Nakryiko [Tue, 15 Oct 2019 18:28:48 +0000 (11:28 -0700)]
libbpf: Add BPF-side definitions of supported field relocation kinds

Add enum definition for Clang's __builtin_preserve_field_info()
second argument (info_kind). Currently only byte offset and existence
are supported. Corresponding Clang changes introducing this built-in can
be found at [0]

  [0] https://reviews.llvm.org/D67980

Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20191015182849.3922287-5-andriin@fb.com
Andrii Nakryiko [Tue, 15 Oct 2019 18:28:47 +0000 (11:28 -0700)]
libbpf: Add support for field existence CO-RE relocation

Add support for the BPF_FRK_EXISTS relocation kind to detect the existence
of a captured field in the destination BTF, allowing conditional logic to
handle incompatible differences between kernels.

Also introduce an opt-in relaxed CO-RE relocation handling option, which
makes libbpf emit a warning for failed relocations but proceed with other
relocations. An instruction whose relocation failed is patched with the
(u32)-1 value.

Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20191015182849.3922287-4-andriin@fb.com
Andrii Nakryiko [Tue, 15 Oct 2019 18:28:46 +0000 (11:28 -0700)]
libbpf: Refactor bpf_object__open APIs to use common opts

Refactor all the various bpf_object__open variations to ultimately
specify common bpf_object_open_opts struct. This makes it easy to keep
extending this common struct w/ extra parameters without having to
update all the legacy APIs.

Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20191015182849.3922287-3-andriin@fb.com
Andrii Nakryiko [Tue, 15 Oct 2019 18:28:45 +0000 (11:28 -0700)]
libbpf: Update BTF reloc support to latest Clang format

The BTF offset reloc was generalized in recent Clang into a field relocation,
capturing an extra u32 field specifying what aspect of the captured field
needs to be relocated. This changes .BTF.ext's record size for this
relocation from 12 bytes to 16 bytes. Given that these format changes
happened in Clang before an officially released version, it's ok to not
support the outdated 12-byte record size w/o breaking ABI.

Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20191015182849.3922287-2-andriin@fb.com
David Ahern [Sat, 12 Oct 2019 02:43:03 +0000 (20:43 -0600)]
net: Update address for vrf and l3mdev in MAINTAINERS

Use my kernel.org address for all entries in MAINTAINERS.

Signed-off-by: David Ahern <dsahern@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Tue, 15 Oct 2019 17:16:57 +0000 (13:16 -0400)]
Merge branch 'Scatter-gather-SPI-for-SJA1105-DSA'

Vladimir Oltean says:

====================
Scatter/gather SPI for SJA1105 DSA

This is a small series that reduces the stack memory usage for the
sja1105 driver.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
Vladimir Oltean [Fri, 11 Oct 2019 22:31:15 +0000 (01:31 +0300)]
net: dsa: sja1105: Switch to scatter/gather API for SPI

This reworks the SPI transfer implementation to make use of more of the
SPI core features. The main benefit is to avoid the memcpy in
sja1105_xfer_buf().

The memcpy was only needed because the function was transferring a
single buffer at a time. So it needed to copy the caller-provided buffer
at buf + 4, to store the SPI message header in the "headroom" area.

But the SPI core supports scatter-gather messages, comprised of multiple
transfers. We can actually use those to break apart every SPI message
into 2 transfers: one for the header and one for the actual payload.

To keep the behavior the same regarding the chip select signal, it is
necessary to tell the SPI core to de-assert the chip select after each
chunk. This was not needed before, because each spi_message contained
only 1 single transfer.

The meaning of the per-transfer cs_change=1 is:

- If the transfer is the last one of the message, keep CS asserted
- Otherwise, deassert CS

We need to deassert CS in the "otherwise" case, which was implicit
before.
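
Roughly, the message layout described above could be built like this (a
sketch, not the driver code; header contents, chunk bookkeeping and
error handling are simplified):

#include <linux/spi/spi.h>

#define SKETCH_MAX_CHUNKS    8
#define SKETCH_CHUNK_LEN     256

static int xfer_buf_sketch(struct spi_device *spi, u8 (*hdr)[4],
                           u8 *buf, size_t len)
{
    struct spi_transfer xfers[2 * SKETCH_MAX_CHUNKS] = {};
    struct spi_message msg;
    int i;

    spi_message_init(&msg);
    for (i = 0; len && i < SKETCH_MAX_CHUNKS; i++) {
        size_t n = len < SKETCH_CHUNK_LEN ? len : SKETCH_CHUNK_LEN;

        xfers[2 * i].tx_buf = hdr[i];          /* 4-byte SPI header */
        xfers[2 * i].len = 4;
        xfers[2 * i + 1].tx_buf = buf;         /* payload straight from caller */
        xfers[2 * i + 1].len = n;
        /* per the rule above: deassert CS after every chunk that is not
         * the last transfer of the message */
        xfers[2 * i + 1].cs_change = (len > n);
        spi_message_add_tail(&xfers[2 * i], &msg);
        spi_message_add_tail(&xfers[2 * i + 1], &msg);
        buf += n;
        len -= n;
    }
    return spi_sync(spi, &msg);
}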

Avoiding the memcpy creates yet another opportunity. The device can't
process more than 256 bytes of SPI payload at a time, so the
sja1105_xfer_long_buf() function used to exist, to split the larger
caller buffer into chunks.

But these chunks couldn't be used as scatter/gather buffers for
spi_message until now, because of that memcpy (we would have needed more
memory for each chunk). So we can now remove the sja1105_xfer_long_buf()
function and have a single implementation for long and short buffers.

Another benefit is lower usage of stack memory. Previously we had to
store 2 SPI buffers for each chunk. Due to the elimination of the
memcpy, we can now send pointers to the actual chunks from the
caller-supplied buffer to the SPI core.

Since the patch merges two functions into a rewritten implementation,
the function prototype was also changed, mainly for cosmetic consistency
with the structures used within it.

Signed-off-by: Vladimir Oltean <olteanv@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Vladimir Oltean [Fri, 11 Oct 2019 22:31:14 +0000 (01:31 +0300)]
net: dsa: sja1105: Move sja1105_spi_transfer into sja1105_xfer

This is a cosmetic patch that reduces some boilerplate in the SPI
interaction of the driver.

Signed-off-by: Vladimir Oltean <olteanv@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Colin Ian King [Fri, 11 Oct 2019 17:22:32 +0000 (18:22 +0100)]
net: b44: remove redundant assignment to variable reg

The variable reg is being assigned a value that is never read
and is being re-assigned in the following for-loop. The
assignment is redundant and hence can be removed.

Addresses-Coverity: ("Unused value")
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Mon, 14 Oct 2019 23:45:41 +0000 (16:45 -0700)]
Merge branch 'PTP-driver-refactoring-for-SJA1105-DSA'

Vladimir Oltean says:

====================
PTP driver refactoring for SJA1105 DSA

This series creates a better separation between the driver core and the
PTP portion. Therefore, users who are not interested in PTP can get a
simpler and smaller driver by compiling it out.

This is in preparation for further patches: SPI transfer timestamping,
synchronizing the hardware clock (as opposed to keeping it
free-running), PPS input/output, etc.
====================

Acked-by: Richard Cochran <richardcochran@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Vladimir Oltean [Fri, 11 Oct 2019 23:18:16 +0000 (02:18 +0300)]
net: dsa: sja1105: Change the PTP command access pattern

The PTP command register contains enable bits for:
- Putting the 64-bit PTPCLKVAL register in add/subtract or write mode
- Taking timestamps off of the corrected vs free-running clock
- Starting/stopping the TTEthernet scheduling
- Starting/stopping PPS output
- Resetting the switch

When a command needs to be issued (e.g. "change the PTPCLKVAL from write
mode to add/subtract mode"), one cannot simply write to the command
register setting the PTPCLKADD bit to 1, because that would zeroize the
other settings. One also cannot do a read-modify-write (that would be
too easy for this hardware) because not all bits of the command register
are readable over SPI.

So this leaves us with the only option of keeping the value of the PTP
command register in the driver, and operating on that.

Actually there are 2 types of PTP operations now:
- Operations that modify the cached PTP command. These operate on
  ptp_data->cmd as a pointer.
- Operations that apply all previously cached PTP settings, but don't
  otherwise cache what they did themselves. The sja1105_ptp_reset
  function is such an example. It copies the ptp_data->cmd on stack
  before modifying and writing it to SPI.

This practically means that struct sja1105_ptp_cmd is no longer an
implementation detail, since it needs to be stored in full into struct
sja1105_ptp_data, and hence in struct sja1105_private. So the (*ptp_cmd)
function prototype can change and take struct sja1105_ptp_cmd as second
argument now.

Signed-off-by: Vladimir Oltean <olteanv@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Vladimir Oltean [Fri, 11 Oct 2019 23:18:15 +0000 (02:18 +0300)]
net: dsa: sja1105: Move PTP data to its own private structure

This is a non-functional change with 2 goals (both for the case when
CONFIG_NET_DSA_SJA1105_PTP is not enabled):

- Reduce the size of the sja1105_private structure.
- Make the PTP code more self-contained.

Leaving priv->ptp_data.lock to be initialized in sja1105_main.c is not a
leftover: it will be used in a future patch "net: dsa: sja1105: Restore
PTP time after switch reset".

Signed-off-by: Vladimir Oltean <olteanv@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Vladimir Oltean [Fri, 11 Oct 2019 23:18:14 +0000 (02:18 +0300)]
net: dsa: sja1105: Make all public PTP functions take dsa_switch as argument

The new rule (as already started for sja1105_tas.h) is for functions of
optional driver components (ones which may be disabled via Kconfig - PTP
and TAS) to take struct dsa_switch *ds instead of struct sja1105_private
*priv as first argument.

This is so that forward-declarations of struct sja1105_private can be
avoided.

So make sja1105_ptp.h the second user of this rule.

Signed-off-by: Vladimir Oltean <olteanv@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Vladimir Oltean [Fri, 11 Oct 2019 23:18:13 +0000 (02:18 +0300)]
net: dsa: sja1105: Get rid of global declaration of struct ptp_clock_info

We need priv->ptp_caps to hold a structure and not just a pointer,
because we use container_of in the various PTP callbacks.
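
A sketch of why the embedded copy is needed (the callback prototype is
the standard ptp_clock_info one; the struct and function names here are
illustrative):

#include <linux/kernel.h>
#include <linux/ptp_clock_kernel.h>

struct sja1105_private_sketch {
    struct ptp_clock_info ptp_caps;    /* embedded, not a pointer */
    /* ... the rest of the driver state ... */
};

static int sketch_gettime(struct ptp_clock_info *ptp, struct timespec64 *ts)
{
    struct sja1105_private_sketch *priv =
        container_of(ptp, struct sja1105_private_sketch, ptp_caps);

    /* read the hardware clock using priv->... and fill *ts */
    (void)priv;
    ts->tv_sec = 0;
    ts->tv_nsec = 0;
    return 0;
}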

Therefore, the sja1105_ptp_caps structure declared in the global memory
of the driver serves no further purpose after copying it into
priv->ptp_caps.

So just populate priv->ptp_caps with the needed operations and remove
sja1105_ptp_caps.

Signed-off-by: Vladimir Oltean <olteanv@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Mon, 14 Oct 2019 19:17:21 +0000 (12:17 -0700)]
Merge git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next

Alexei Starovoitov says:

====================
pull-request: bpf-next 2019-10-14

The following pull-request contains BPF updates for your *net-next* tree.

12 days of development and
85 files changed, 1889 insertions(+), 1020 deletions(-)

The main changes are:

1) auto-generation of bpf_helper_defs.h, from Andrii.

2) split of bpf_helpers.h into bpf_{helpers, helper_defs, endian, tracing}.h
   and move into libbpf, from Andrii.

3) Track contents of read-only maps as scalars in the verifier, from Andrii.

4) small x86 JIT optimization, from Daniel.

5) cross compilation support, from Ivan.

6) bpf flow_dissector enhancements, from Jakub and Stanislav.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Sun, 13 Oct 2019 18:29:07 +0000 (11:29 -0700)]
Merge tag 'mac80211-next-for-net-next-2019-10-11' of git://git.kernel.org/pub/scm/linux/kernel/git/jberg/mac80211-next

Johannes Berg says:

====================
A few more small things, nothing really stands out:
 * minstrel improvements from Felix
 * a TX aggregation simplification
 * some additional capabilities for hwsim
 * minor cleanups & docs updates
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
Michal Kubecek [Fri, 11 Oct 2019 07:40:09 +0000 (09:40 +0200)]
genetlink: do not parse attributes for families with zero maxattr

Commit c10e6cf85e7d ("net: genetlink: push attrbuf allocation and parsing
to a separate function") moved attribute buffer allocation and attribute
parsing from genl_family_rcv_msg_doit() into a separate function
genl_family_rcv_msg_attrs_parse() which, unlike the previous code, calls
__nlmsg_parse() even if family->maxattr is 0 (i.e. the family does its own
parsing). The parser error is ignored and does not propagate out of
genl_family_rcv_msg_attrs_parse() but an error message ("Unknown attribute
type") is set in extack and if further processing generates no error or
warning, it stays there and is interpreted as a warning by userspace.

Dumpit requests are not affected as genl_family_rcv_msg_dumpit() bypasses
the call of genl_family_rcv_msg_attrs_parse() if family->maxattr is zero.
Move this logic inside genl_family_rcv_msg_attrs_parse() so that we don't
have to handle it in each caller.
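
A sketch of the check that now lives inside
genl_family_rcv_msg_attrs_parse() (simplified; the real function also
allocates the attrbuf and calls __nlmsg_parse() when parsing is actually
needed):

#include <net/genetlink.h>

static bool genl_family_needs_parsing(const struct genl_family *family)
{
    /* maxattr == 0 means the family parses attributes itself: skip
     * __nlmsg_parse() so no bogus "Unknown attribute type" message is
     * left in extack */
    return family->maxattr != 0;
}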

v3: put the check inside genl_family_rcv_msg_attrs_parse()
v2: adjust also argument of genl_family_rcv_msg_attrs_free()

Fixes: c10e6cf85e7d ("net: genetlink: push attrbuf allocation and parsing to a separate function")
Signed-off-by: Michal Kubecek <mkubecek@suse.cz>
Acked-by: Jiri Pirko <jiri@mellanox.com>
Acked-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Soheil Hassas Yeganeh [Fri, 11 Oct 2019 03:27:02 +0000 (23:27 -0400)]
tcp: improve recv_skip_hint for tcp_zerocopy_receive

tcp_zerocopy_receive() rounds down zc->length to a multiple of
PAGE_SIZE. This results in two issues:
- tcp_zerocopy_receive sets recv_skip_hint to the length of the
  receive queue if the zc->length input is smaller than
  PAGE_SIZE, even though the data in the receive queue could be
  zerocopied.
- tcp_zerocopy_receive would set recv_skip_hint to 0 in cases
  where we have a little bit of data after the perfectly-sized
  packets.

To fix these issues, do not store the rounded down value in
zc->length. Round down the length passed to zap_page_range(),
and return min(inq, zc->length) when the zap_range is 0.
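
An arithmetic sketch of the intended behaviour (local names are
illustrative, not the actual tcp_zerocopy_receive() variables):

static unsigned int zc_lengths(unsigned int requested_len, unsigned int inq,
                               unsigned int *zap_len)
{
    unsigned int len = requested_len < inq ? requested_len : inq;

    /* round down only the part handed to zap_page_range(); the
     * caller-visible length keeps the sub-page remainder so that
     * recv_skip_hint stays accurate */
    *zap_len = len & ~(4096u - 1);    /* assuming PAGE_SIZE == 4096 */
    return len;
}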

Signed-off-by: Soheil Hassas Yeganeh <soheil@google.com>
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Alexei Starovoitov [Sat, 12 Oct 2019 23:15:10 +0000 (16:15 -0700)]
Merge branch 'selftests-bpf-Makefile-cleanup'

Andrii Nakryiko says:

====================
Patch #1 enforces the libbpf build, so that bpf_helper_defs.h is ready before
the test BPF programs are built.
Patch #2 drops the obsolete BTF/pahole detection logic from the Makefile.

v1->v2:
- drop CPU and PROBE (Martin).
====================

Acked-by: Martin KaFai Lau <kafai@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Andrii Nakryiko [Fri, 11 Oct 2019 22:01:46 +0000 (15:01 -0700)]
selftests/bpf: Remove obsolete pahole/BTF support detection

Given that lots of selftests won't work without a recent enough Clang/LLVM
that fully supports BTF, there is no point in maintaining the outdated BTF
support detection and fall-back-to-pahole logic. Just assume we have
everything we need.

Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20191011220146.3798961-3-andriin@fb.com
Andrii Nakryiko [Fri, 11 Oct 2019 22:01:45 +0000 (15:01 -0700)]
selftests/bpf: Enforce libbpf build before BPF programs are built

Given BPF programs rely on libbpf's bpf_helper_defs.h, which is
auto-generated during libbpf build, libbpf build has to happen before
we attempt progs/*.c build. Enforce it as order-only dependency.

Fixes: 24f25763d6de ("libbpf: auto-generate list of BPF helper definitions")
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20191011220146.3798961-2-andriin@fb.com
Alexei Starovoitov [Sat, 12 Oct 2019 23:09:00 +0000 (16:09 -0700)]
Merge branch 'samples-cross-compile'

Ivan Khoronzhuk says:

====================
This series mainly contains fixes/improvements for cross-compilation,
but not only that; it is tested for arm and arm64, and intended for any arch.
It is also verified for native builds (not cross-compilation) on x86_64,
arm and arm64.

Initial RFC link:
https://lkml.org/lkml/2019/8/29/1665

Prev. version:
https://lkml.org/lkml/2019/10/9/1045

Besides the patches given here, the RFC also contains couple patches
related to llvm clang
  arm: include: asm: swab: mask rev16 instruction for clang
  arm: include: asm: unified: mask .syntax unified for clang
They are necessary to verify the arm 32-bit build.

Also, a couple more fixes were added but are not merged into bpf-next yet;
they can be needed for the verification/configuration steps. If not in
your tree, the fixes can be taken from here:
https://www.spinics.net/lists/netdev/msg601716.html
https://www.spinics.net/lists/netdev/msg601714.html
https://www.spinics.net/lists/linux-kbuild/msg23468.html

Now, to build samples, SAMPLE_BPF should be enabled in config.

The change touches more than cross-compilation and can have an impact on
other archs and build environments, so it might be a good idea to verify
it in order to add appropriate changes; some warning options could be
tuned as well.

Everything is tested on x86-64 with clang installed (it has to be built
containing targets for arm, arm64..., see llc --version; usually they are
present already).

Instructions to test native on x86_64
=================================================
A native build on x86_64 is done in the usual way and shouldn't show any
difference, except that HOSTCC is now printed as CC while building the samples.

Instructions to test cross compilation on arm64
=================================================
gcc version 8.3.0
(GNU Toolchain for the A-profile Architecture 8.3-2019.03 (arm-rel-8.36))

I've used sdk for TI am65x got here:
http://downloads.ti.com/processor-sdk-linux/esd/AM65X/latest/exports/\
ti-processor-sdk-linux-am65xx-evm-06.00.00.07-Linux-x86-Install.bin

make ARCH=arm64 -C tools/ clean
make ARCH=arm64 -C samples/bpf clean
make ARCH=arm64 clean
make ARCH=arm64 defconfig

make ARCH=arm64 headers_install

make ARCH=arm64 INSTALL_HDR_PATH=/../sdk/\
ti-processor-sdk-linux-am65xx-evm-06.00.00.07/linux-devkit/sysroots/\
aarch64-linux/usr headers_install

make samples/bpf/ ARCH=arm64 CROSS_COMPILE="aarch64-linux-gnu-"\
SYSROOT="/../sdk/ti-processor-sdk-linux-am65xx-evm-06.00.00.07/\
linux-devkit/sysroots/aarch64-linux"

Instructions to test cross compilation on arm
=================================================
arm-linux-gnueabihf-gcc (Linaro GCC 7.2-2017.11) 7.2.1 20171011
or
arm-linux-gnueabihf-gcc
(GNU Toolchain for the A-profile Architecture 8.3-2019.03 \
(arm-rel-8.36)) 8.3.0

http://downloads.ti.com/processor-sdk-linux/esd/AM57X/05_03_00_07/exports/\
ti-processor-sdk-linux-am57xx-evm-05.03.00.07-Linux-x86-Install.bin

make ARCH=arm -C tools/ clean
make ARCH=arm -C samples/bpf clean
make ARCH=arm clean
make ARCH=arm omap2plus_defconfig

make ARCH=arm headers_install

make ARCH=arm INSTALL_HDR_PATH=/../sdk/\
ti-processor-sdk-linux-am57xx-evm-05.03.00.07/linux-devkit/sysroots/\
armv7ahf-neon-linux-gnueabi/usr headers_install

make samples/bpf/ ARCH=arm CROSS_COMPILE="arm-linux-gnueabihf-"\
SYSROOT="/../sdk/ti-processor-sdk-linux-am57xx-evm-05.03\
.00.07/linux-devkit/sysroots/armv7ahf-neon-linux-gnueabi"

Based on bpf-next/master

v5..v4:
- no changes, only missing SOBs were added

v4..v3:
- renamed CLANG_EXTRA_CFLAGS on BPF_EXTRA_CFLAGS
- used filter for ARCH_ARM_SELECTOR
- omit "-fomit-frame-pointer" and use same flags for native and "cross"
- used sample/bpf prefixes
- use C instead of C++ compiler for test_libbpf target

v3..v2:
- renamed makefile.progs to makeifle.target, as more appropriate
- left only __LINUX_ARM_ARCH__ for D options for arm
- for host build - left options from KBUILD_HOST for compatibility reasons
- split patch adding c/cxx/ld flags to libbpf by modules
- moved readme change to separate patch
- added patch setting options for cross-compile
- fixed issue with option error for syscall_nrs.S,
  avoiding overlap for ccflags-y.

v2..v1:
- restructured patches order
- split "samples: bpf: Makefile: base progs build on Makefile.progs"
  to make change more readable. It added couple nice extra patches.
- removed redundant patch:
  "samples: bpf: Makefile: remove target for native build"
- added fix:
  "samples: bpf: makefile: fix cookie_uid_helper_example obj build"
- limited -D option filter only for arm
- improved comments
- added couple instructions to verify cross compilation for arm and
  arm64 arches based on TI am57xx and am65xx sdks.
- corrected include a little order
====================

Tested-by: Ilias Apalodimas <ilias.apalodimas@linaro.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Ivan Khoronzhuk [Fri, 11 Oct 2019 00:28:08 +0000 (03:28 +0300)]
samples/bpf: Add preparation steps and sysroot info to readme

Add a couple of preparation steps: clean and configuration. Also add the
newly added sysroot support info to the cross-compile section.

Signed-off-by: Ivan Khoronzhuk <ivan.khoronzhuk@linaro.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20191011002808.28206-16-ivan.khoronzhuk@linaro.org
Ivan Khoronzhuk [Fri, 11 Oct 2019 00:28:07 +0000 (03:28 +0300)]
samples/bpf: Add sysroot support

Basically it only enables what was added by the previous couple of fixes.
The sysroot contains the correct libs and their headers installed. It is
useful when working with NFS or a virtual machine.

Usage example:

clean (on demand)
    make ARCH=arm -C samples/bpf clean
    make ARCH=arm -C tools clean
    make ARCH=arm clean

configure and install headers:

    make ARCH=arm defconfig
    make ARCH=arm headers_install

build samples/bpf:
    make ARCH=arm CROSS_COMPILE=arm-linux-gnueabihf- samples/bpf/ \
    SYSROOT="path/to/sysroot"

Signed-off-by: Ivan Khoronzhuk <ivan.khoronzhuk@linaro.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20191011002808.28206-15-ivan.khoronzhuk@linaro.org
Ivan Khoronzhuk [Fri, 11 Oct 2019 00:28:06 +0000 (03:28 +0300)]
samples/bpf: Provide C/LDFLAGS to libbpf

In order to build the lib using the C/LD flags of the target arch, provide
them to the libbpf make invocation.

Signed-off-by: Ivan Khoronzhuk <ivan.khoronzhuk@linaro.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20191011002808.28206-14-ivan.khoronzhuk@linaro.org
Ivan Khoronzhuk [Fri, 11 Oct 2019 00:28:05 +0000 (03:28 +0300)]
libbpf: Add C/LDFLAGS to libbpf.so and test_libbpf targets

In the case of C/LDFLAGS there is no way to pass them correctly to the build
command, for instance when --sysroot is used or when external libraries
are used, like -lelf, which can be absent in the toolchain. This can be
used for samples/bpf cross-compiling, allowing the elf lib to be taken from
the sysroot.

Signed-off-by: Ivan Khoronzhuk <ivan.khoronzhuk@linaro.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Andrii Nakryiko <andriin@fb.com>
Link: https://lore.kernel.org/bpf/20191011002808.28206-13-ivan.khoronzhuk@linaro.org
Ivan Khoronzhuk [Fri, 11 Oct 2019 00:28:04 +0000 (03:28 +0300)]
libbpf: Don't use cxx for the test_libbpf target

There is no need to use C++ for the test_libbpf target when libbpf is in C
and can be tested with C. After this change the CXXFLAGS in makefiles can
be avoided, at least in bpf samples, when a sysroot is used, passing the
same C/LDFLAGS as for the lib.

Add "return 0" in test_libbpf to avoid a warning, but also remove spaces at
the start of the lines to keep the same style and avoid warnings while
applying.

Signed-off-by: Ivan Khoronzhuk <ivan.khoronzhuk@linaro.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Andrii Nakryiko <andriin@fb.com>
Link: https://lore.kernel.org/bpf/20191011002808.28206-12-ivan.khoronzhuk@linaro.org
Ivan Khoronzhuk [Fri, 11 Oct 2019 00:28:03 +0000 (03:28 +0300)]
samples/bpf: Use target CC environment for HDR_PROBE

There is no need to hack HOSTCC to be a cross-compiler any more, so drop
this trick and use the target CC for HDR_PROBE.

Signed-off-by: Ivan Khoronzhuk <ivan.khoronzhuk@linaro.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Andrii Nakryiko <andriin@fb.com>
Link: https://lore.kernel.org/bpf/20191011002808.28206-11-ivan.khoronzhuk@linaro.org
Ivan Khoronzhuk [Fri, 11 Oct 2019 00:28:02 +0000 (03:28 +0300)]
samples/bpf: Use own flags but not HOSTCFLAGS

While compiling natively, the host's cflags and ldflags are equal to the
ones used from HOSTCFLAGS and HOSTLDFLAGS. When cross compiling, it
should have its own flags, used for the target arch. During verification,
for arm, arm64 and x86_64 the following flags were always used:

-Wall -O2
-fomit-frame-pointer
-Wmissing-prototypes
-Wstrict-prototypes

So, add them as they were verified and used before adding
Makefile.target, and let's omit "-fomit-frame-pointer" as was proposed
during review, as there is no sense in such an optimization for samples.

Signed-off-by: Ivan Khoronzhuk <ivan.khoronzhuk@linaro.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20191011002808.28206-10-ivan.khoronzhuk@linaro.org
Ivan Khoronzhuk [Fri, 11 Oct 2019 00:28:01 +0000 (03:28 +0300)]
samples/bpf: Base target programs rules on Makefile.target

The main reason for that: HOSTCC and CC have different aims.
HOSTCC is used to build programs running on the host, which can then
cross-compile target programs with CC. It was tested for arm and arm64
cross compilation, based on the linaro toolchain, but should work for
others.

So, in order to split cross compilation (CC) from the host build (HOSTCC),
let's base the samples on Makefile.target. It allows cross-compiling
samples/bpf programs with CC while the auxiliary tools running on the host
are built with HOSTCC.

Signed-off-by: Ivan Khoronzhuk <ivan.khoronzhuk@linaro.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20191011002808.28206-9-ivan.khoronzhuk@linaro.org
Ivan Khoronzhuk [Fri, 11 Oct 2019 00:28:00 +0000 (03:28 +0300)]
samples/bpf: Add makefile.target for separate CC target build

Makefile.target is only added here; it will be used in
samples/bpf/Makefile later in order to switch cross-compiling to CC
from the HOSTCC environment.

HOSTCC is supposed to build binaries and tools that run on the host
afterwards, in order to simplify the build, like "fixdep" and others.
In the case of cross compiling, "fixdep" is executed on the host while the
rest of the samples should run on the target arch. In order to build
binaries for the target arch with CC and tools running on the host with
HOSTCC, let's add Makefile.target for simplicity, having definitions and
routines similar to the ones used in scripts/Makefile.host. This allows
cross-compilation to be added later to samples/bpf with minimal changes.

The tprog stands for target programs built with CC.

Makefile.target contains only stuff needed for samples/bpf, potentially
can be reused later and now needed only for unblocking tricky
samples/bpf cross compilation.

Signed-off-by: Ivan Khoronzhuk <ivan.khoronzhuk@linaro.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20191011002808.28206-8-ivan.khoronzhuk@linaro.org
Ivan Khoronzhuk [Fri, 11 Oct 2019 00:27:59 +0000 (03:27 +0300)]
samples/bpf: Drop unnecessarily inclusion for bpf_load

Drop the -I$(objtree)/usr/include include path for bpf_load, as it is
already included for all objects by the line above it:
KBUILD_HOSTCFLAGS += -I$(objtree)/usr/include

Signed-off-by: Ivan Khoronzhuk <ivan.khoronzhuk@linaro.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Andrii Nakryiko <andriin@fb.com>
Link: https://lore.kernel.org/bpf/20191011002808.28206-7-ivan.khoronzhuk@linaro.org
5 years agosamples/bpf: Use __LINUX_ARM_ARCH__ selector for arm
Ivan Khoronzhuk [Fri, 11 Oct 2019 00:27:58 +0000 (03:27 +0300)]
samples/bpf: Use __LINUX_ARM_ARCH__ selector for arm

For arm, -D__LINUX_ARM_ARCH__=X defines the minimum architecture
version used as the instruction set selector and is absolutely required
while parsing some parts of the headers. It's present in KBUILD_CFLAGS
but not in autoconf.h, so retrieve it from KBUILD_CFLAGS and add it to
the programs' cflags. Otherwise errors like "SMP is not supported" for
armv7 and a bunch of other errors are issued, resulting in an incorrect
final object.

Signed-off-by: Ivan Khoronzhuk <ivan.khoronzhuk@linaro.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20191011002808.28206-6-ivan.khoronzhuk@linaro.org
5 years agosamples/bpf: Use own EXTRA_CFLAGS for clang commands
Ivan Khoronzhuk [Fri, 11 Oct 2019 00:27:57 +0000 (03:27 +0300)]
samples/bpf: Use own EXTRA_CFLAGS for clang commands

It can overlap with the CFLAGS used for libraries built with gcc, if
not now then in subsequent patches. Correct it here for simplicity.

Signed-off-by: Ivan Khoronzhuk <ivan.khoronzhuk@linaro.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Andrii Nakryiko <andriin@fb.com>
Link: https://lore.kernel.org/bpf/20191011002808.28206-5-ivan.khoronzhuk@linaro.org
5 years agosamples/bpf: Use --target from cross-compile
Ivan Khoronzhuk [Fri, 11 Oct 2019 00:27:56 +0000 (03:27 +0300)]
samples/bpf: Use --target from cross-compile

For cross compiling, the target triple can be inherited from the
cross-compile prefix, as is done for CLANG_FLAGS in the kernel
Makefile. So copy this decision from the kernel Makefile.

Signed-off-by: Ivan Khoronzhuk <ivan.khoronzhuk@linaro.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Andrii Nakryiko <andriin@fb.com>
Link: https://lore.kernel.org/bpf/20191011002808.28206-4-ivan.khoronzhuk@linaro.org
5 years agosamples/bpf: Fix cookie_uid_helper_example obj build
Ivan Khoronzhuk [Fri, 11 Oct 2019 00:27:55 +0000 (03:27 +0300)]
samples/bpf: Fix cookie_uid_helper_example obj build

Don't list the userspace "cookie_uid_helper_example" object in the
list of bpf objects.

The 'always' target is used for listing bpf programs, but
'cookie_uid_helper_example.o' is a user space ELF file already covered
by the `per_socket_stats_example` rule, so it shouldn't be in 'always'.
Remove `always += cookie_uid_helper_example.o`, which avoids breaking
cross compilation due to mismatched includes.

Signed-off-by: Ivan Khoronzhuk <ivan.khoronzhuk@linaro.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Andrii Nakryiko <andriin@fb.com>
Link: https://lore.kernel.org/bpf/20191011002808.28206-3-ivan.khoronzhuk@linaro.org
5 years agosamples/bpf: Fix HDR_PROBE "echo"
Ivan Khoronzhuk [Fri, 11 Oct 2019 00:27:54 +0000 (03:27 +0300)]
samples/bpf: Fix HDR_PROBE "echo"

echo should be replaced with echo -e to handle '\n' correctly, but
since some systems can't handle echo -e, replace it with printf instead.

Signed-off-by: Ivan Khoronzhuk <ivan.khoronzhuk@linaro.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Andrii Nakryiko <andriin@fb.com>
Link: https://lore.kernel.org/bpf/20191011002808.28206-2-ivan.khoronzhuk@linaro.org
5 years agoMerge branch 'netdevsim-add-devlink-health-reporters-support'
David S. Miller [Sat, 12 Oct 2019 04:04:39 +0000 (21:04 -0700)]
Merge branch 'netdevsim-add-devlink-health-reporters-support'

Jiri Pirko says:

====================
netdevsim: add devlink health reporters support

This patchset adds support for devlink health reporter interface
testing. First 2 patches are small dependencies of the last 2.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
5 years agoselftests: add netdevsim devlink health tests
Jiri Pirko [Thu, 10 Oct 2019 13:18:51 +0000 (15:18 +0200)]
selftests: add netdevsim devlink health tests

Add basic tests to verify functionality of netdevsim reporters.

Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
5 years agonetdevsim: implement a couple of testing devlink health reporters
Jiri Pirko [Thu, 10 Oct 2019 13:18:50 +0000 (15:18 +0200)]
netdevsim: implement a couple of testing devlink health reporters

Implement "empty" and "dummy" reporters. The first one is really simple
and does nothing. The other one has debugfs files to trigger breakage
and it is able to do recovery. The ops also implement dummy fmsg
content.

Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
5 years agodevlink: propagate extack down to health reporter ops
Jiri Pirko [Thu, 10 Oct 2019 13:18:49 +0000 (15:18 +0200)]
devlink: propagate extack down to health reporter ops

During health reporter operations, the driver might want to fill in
the extack message, so propagate extack down to the health reporter ops.
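
For illustration, a recover callback could then attach a reason to a
failure roughly as below (the callback parameter list is approximated
from this description, and the foo_* names are made up):

static int
foo_reporter_recover(struct devlink_health_reporter *reporter,
		     void *priv_ctx, struct netlink_ext_ack *extack)
{
	struct foo_dev *fdev = devlink_health_reporter_priv(reporter);

	if (!foo_dev_ready(fdev)) {
		NL_SET_ERR_MSG_MOD(extack, "Device not ready for recovery");
		return -EBUSY;
	}

	return foo_dev_reset(fdev);
}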

Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
5 years agodevlink: don't do reporter recovery if the state is healthy
Jiri Pirko [Thu, 10 Oct 2019 13:18:48 +0000 (15:18 +0200)]
devlink: don't do reporter recovery if the state is healthy

If reporter state is healthy, don't call into a driver for recover and
don't increase recovery count.

Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
5 years agonet: usb: ax88179_178a: write mac to hardware in get_mac_addr
Peter Fink [Thu, 10 Oct 2019 13:00:22 +0000 (15:00 +0200)]
net: usb: ax88179_178a: write mac to hardware in get_mac_addr

When the MAC address is supplied via device tree or a random MAC is
generated, it has to be written to the asix chip in order to receive
any data.

Previously, in 9fb137aef34e ("net: usb: ax88179_178a: allow optionally
getting mac address from device tree"), this write was omitted because
everything seemed to work perfectly fine without it.

But the problem was simply not detected, because the chip keeps the MAC
stored even across a reset, and the change was tested on hardware with
an integrated UPS where the asix chip stayed powered on even throughout
power cycles.

Fixes: 9fb137aef34e ("net: usb: ax88179_178a: allow optionally getting mac address from device tree")
Signed-off-by: Peter Fink <pfink@christ-es.de>
Signed-off-by: David S. Miller <davem@davemloft.net>
5 years agosock_get_timeout: drop unnecessary return variable
Vito Caputo [Thu, 10 Oct 2019 04:08:24 +0000 (21:08 -0700)]
sock_get_timeout: drop unnecessary return variable

Remove pointless use of size return variable by directly returning
sizes.

Signed-off-by: Vito Caputo <vcaputo@pengaru.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
5 years agoaf_unix: __unix_find_socket_byname() cleanup
Vito Caputo [Thu, 10 Oct 2019 03:43:47 +0000 (20:43 -0700)]
af_unix: __unix_find_socket_byname() cleanup

Remove the pointless return variable dance.

It appears vestigial from when the function did locking, as seen in
unix_find_socket_byinode(), but for __unix_find_socket_byname() the
locking is handled in its caller, unix_find_socket_byname().

Signed-off-by: Vito Caputo <vcaputo@pengaru.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
5 years agoMerge branch 'net-ftgmac100-Ungate-RCLK-for-RMII-on-ASPEED-MACs'
David S. Miller [Sat, 12 Oct 2019 03:37:38 +0000 (20:37 -0700)]
Merge branch 'net-ftgmac100-Ungate-RCLK-for-RMII-on-ASPEED-MACs'

Andrew Jeffery says:

====================
net: ftgmac100: Ungate RCLK for RMII on ASPEED MACs

This series slightly extends the devicetree binding and driver for the
FTGMAC100 to describe an optional RMII RCLK gate in the clocks property.
Currently it's necessary for the kernel to ungate RCLK on the AST2600 in NCSI
configurations as u-boot does not yet support NCSI (which uses the
R(educed)MII).

v2:
* Clear up Reduced vs Reversed MII in the cover letter
* Mitigate anxiety in the commit message for 1/3
* Clarify that AST2500 is also affected in the clocks property description in
  2/3
* Rework the error paths and update some comments in 3/3

v1 can be found here: https://lore.kernel.org/netdev/20191008115143.14149-1-andrew@aj.id.au/
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
5 years agonet: ftgmac100: Ungate RCLK for RMII on ASPEED MACs
Andrew Jeffery [Thu, 10 Oct 2019 02:07:56 +0000 (12:37 +1030)]
net: ftgmac100: Ungate RCLK for RMII on ASPEED MACs

The 50MHz RCLK has to be enabled before the RMII interface will function.
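
In driver terms this boils down to fetching and ungating an optional
clock at probe time, roughly like the sketch below (the "RCLK" clock
name, the foo_ helper name and the error handling are assumptions
based on the series description, not the exact driver code):

#include <linux/clk.h>
#include <linux/device.h>
#include <linux/err.h>

/* Sketch: ungate the RMII reference clock so the interface can run. */
static int foo_enable_rclk(struct device *dev)
{
	struct clk *rclk;

	rclk = devm_clk_get_optional(dev, "RCLK");
	if (IS_ERR(rclk))
		return PTR_ERR(rclk);

	/* the 50MHz RCLK must be running before RMII traffic can flow */
	return clk_prepare_enable(rclk);
}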

Signed-off-by: Andrew Jeffery <andrew@aj.id.au>
Reviewed-by: Joel Stanley <joel@jms.id.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
5 years agodt-bindings: net: ftgmac100: Describe clock properties
Andrew Jeffery [Thu, 10 Oct 2019 02:07:55 +0000 (12:37 +1030)]
dt-bindings: net: ftgmac100: Describe clock properties

Critically, the AST2600 requires ungating the RMII RCLK if e.g. NCSI is
in use.

Signed-off-by: Andrew Jeffery <andrew@aj.id.au>
Acked-by: Joel Stanley <joel@jms.id.au>
Acked-by: Rob Herring <robh@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
5 years agodt-bindings: net: ftgmac100: Document AST2600 compatible
Andrew Jeffery [Thu, 10 Oct 2019 02:07:54 +0000 (12:37 +1030)]
dt-bindings: net: ftgmac100: Document AST2600 compatible

The AST2600 contains an FTGMAC100-compatible MAC, although the MDIO
controller previously embedded in the MAC has been moved out to a
dedicated MDIO block.

Signed-off-by: Andrew Jeffery <andrew@aj.id.au>
Acked-by: Joel Stanley <joel@jms.id.au>
Acked-by: Rob Herring <robh@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
5 years agolibbpf: Handle invalid typedef emitted by old GCC
Andrii Nakryiko [Fri, 11 Oct 2019 03:29:01 +0000 (20:29 -0700)]
libbpf: Handle invalid typedef emitted by old GCC

Old GCC versions produce an invalid typedef for __gnuc_va_list,
pointing it to void. Special-case this and emit the valid form:

typedef __builtin_va_list __gnuc_va_list;

Reported-by: John Fastabend <john.fastabend@gmail.com>
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Martin KaFai Lau <kafai@fb.com>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Link: https://lore.kernel.org/bpf/20191011032901.452042-1-andriin@fb.com
5 years agolibbpf: Generate more efficient BPF_CORE_READ code
Andrii Nakryiko [Fri, 11 Oct 2019 02:38:47 +0000 (19:38 -0700)]
libbpf: Generate more efficient BPF_CORE_READ code

Existing BPF_CORE_READ() macro generates slightly suboptimal code. If
there are intermediate pointers to be read, initial source pointer is
going to be assigned into a temporary variable and then temporary
variable is going to be uniformly used as a "source" pointer for all
intermediate pointer reads. Schematically (ignoring all the type casts),
BPF_CORE_READ(s, a, b, c) is expanded into:
({
const void *__t = src;
bpf_probe_read(&__t, sizeof(*__t), &__t->a);
bpf_probe_read(&__t, sizeof(*__t), &__t->b);

typeof(s->a->b->c) __r;
bpf_probe_read(&__r, sizeof(*__r), &__t->c);
})

This initial `__t = src` makes the calls more uniform, but sometimes
causes slightly less optimal register usage when compiled with Clang.
This can cascade into, e.g., more register spills.

This patch fixes this issue by generating more optimal sequence:
({
const void *__t;
bpf_probe_read(&__t, sizeof(*__t), &src->a); /* <-- src here */
bpf_probe_read(&__t, sizeof(*__t), &__t->b);

typeof(s->a->b->c) __r;
bpf_probe_read(&__r, sizeof(*__r), &__t->c);
})
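
For reference, a typical use of the macro looks roughly like this
(program type, header names and the accessed fields are illustrative
only, not taken from an actual selftest):

#include "vmlinux.h"		/* or equivalent kernel type definitions */
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_core_read.h>	/* header locations assumed for this sketch */

SEC("kprobe/do_sys_open")
int trace_open(struct pt_regs *ctx)
{
	struct task_struct *task = (void *)bpf_get_current_task();
	int ppid;

	/* expands into a chain of bpf_probe_read() calls that now
	 * starts directly from 'task', as shown above */
	ppid = BPF_CORE_READ(task, real_parent, tgid);
	return ppid > 0 ? 0 : 1;
}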

Fixes: 7db3822ab991 ("libbpf: Add BPF_CORE_READ/BPF_CORE_READ_INTO helpers")
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20191011023847.275936-1-andriin@fb.com
5 years agoxdp: Trivial, fix spelling in function description
Anton Ivanov [Fri, 11 Oct 2019 08:43:03 +0000 (09:43 +0100)]
xdp: Trivial, fix spelling in function description

Fix the typo 'boolian' to 'boolean'.

Signed-off-by: Anton Ivanov <anton.ivanov@cambridgegreys.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20191011084303.28418-1-anton.ivanov@cambridgegreys.com
5 years agobpf: Fix cast to pointer from integer of different size warning
Andrii Nakryiko [Fri, 11 Oct 2019 17:20:53 +0000 (10:20 -0700)]
bpf: Fix cast to pointer from integer of different size warning

Fix "warning: cast to pointer from integer of different size" when
casting u64 addr to void *.
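
The usual pattern for this is to go through an integer of pointer width
first, e.g. (a generic sketch, not the exact hunk):

/* Convert a u64 address to a pointer without truncation warnings
 * on 32-bit builds. */
static inline void *u64_to_ptr(u64 addr)
{
	return (void *)(unsigned long)addr;
}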

Fixes: a23740ec43ba ("bpf: Track contents of read-only maps as scalars")
Reported-by: kbuild test robot <lkp@intel.com>
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Martin KaFai Lau <kafai@fb.com>
Link: https://lore.kernel.org/bpf/20191011172053.2980619-1-andriin@fb.com
5 years agoselftests/bpf: Check that flow dissector can be re-attached
Jakub Sitnicki [Fri, 11 Oct 2019 08:29:46 +0000 (10:29 +0200)]
selftests/bpf: Check that flow dissector can be re-attached

Make sure a new flow dissector program can be attached to replace the old
one with a single syscall. Also check that attaching the same program twice
is prohibited.

Signed-off-by: Jakub Sitnicki <jakub@cloudflare.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Reviewed-by: Stanislav Fomichev <sdf@google.com>
Acked-by: Martin KaFai Lau <kafai@fb.com>
Link: https://lore.kernel.org/bpf/20191011082946.22695-3-jakub@cloudflare.com
5 years agoflow_dissector: Allow updating the flow dissector program atomically
Jakub Sitnicki [Fri, 11 Oct 2019 08:29:45 +0000 (10:29 +0200)]
flow_dissector: Allow updating the flow dissector program atomically

It is currently not possible to detach the flow dissector program and
attach a new one in an atomic fashion, that is with a single syscall.
Attempts to do so will be met with EEXIST error.

This makes updates to flow dissector program hard. Traffic steering that
relies on BPF-powered flow dissection gets disrupted while old program has
been already detached but the new one has not been attached yet.

There is also a window of opportunity to attach a flow dissector to a
non-root namespace while updating the root flow dissector, thus blocking
the update.

Lastly, the behavior is inconsistent with cgroup BPF programs, which can be
replaced with a single bpf(BPF_PROG_ATTACH, ...) syscall without any
restrictions.

Allow attaching a new flow dissector program when another one is already
present with a restriction that it can't be the same program.
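
With libbpf, replacing the flow dissector program therefore becomes a
single call, roughly as sketched here (error handling elided; the
helper name is made up):

#include <bpf/bpf.h>

/* new_prog_fd: fd of a loaded BPF_PROG_TYPE_FLOW_DISSECTOR program.
 * Before this change the kernel returned -EEXIST if another program
 * was attached; now the old program is swapped out atomically, as
 * long as new_prog_fd does not refer to the already-attached program. */
static int replace_flow_dissector(int new_prog_fd)
{
	return bpf_prog_attach(new_prog_fd, 0 /* target fd unused */,
			       BPF_FLOW_DISSECTOR, 0);
}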

Signed-off-by: Jakub Sitnicki <jakub@cloudflare.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Reviewed-by: Stanislav Fomichev <sdf@google.com>
Acked-by: Martin KaFai Lau <kafai@fb.com>
Link: https://lore.kernel.org/bpf/20191011082946.22695-2-jakub@cloudflare.com
5 years agobpf: Align struct bpf_prog_stats
Eric Dumazet [Fri, 11 Oct 2019 18:11:40 +0000 (11:11 -0700)]
bpf: Align struct bpf_prog_stats

Do not risk these small structures spanning two cache lines.
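
The shape of the fix is simply to give the structure an explicit
alignment, along these lines (field list assumed for illustration, not
the exact definition):

struct bpf_prog_stats {
	u64 cnt;			/* number of program runs */
	u64 nsecs;			/* cumulative run time */
	struct u64_stats_sync syncp;	/* reader/writer sync for 32-bit */
} __aligned(2 * sizeof(u64));	/* avoid straddling a cache line boundary */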

Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20191011181140.2898-1-edumazet@google.com
5 years agomac80211_hwsim: add support for OCB
Ramon Fontes [Thu, 10 Oct 2019 18:13:07 +0000 (15:13 -0300)]
mac80211_hwsim: add support for OCB

OCB (Outside the Context of a BSS) interfaces are the implementation
of 802.11p; add support for them.

Signed-off-by: Ramon Fontes <ramonreisfontes@gmail.com>
Link: https://lore.kernel.org/r/20191010181307.11821-2-ramonreisfontes@gmail.com
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
5 years agomac80211_hwsim: add more 5GHz channels, 5/10 MHz support
Ramon Fontes [Thu, 10 Oct 2019 18:13:06 +0000 (15:13 -0300)]
mac80211_hwsim: add more 5GHz channels, 5/10 MHz support

These new 5GHz channels and 5/10 MHz support should be
available for OCB usage (802.11p).

Signed-off-by: Ramon Fontes <ramonreisfontes@gmail.com>
Link: https://lore.kernel.org/r/20191010181307.11821-1-ramonreisfontes@gmail.com
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
5 years agomac80211: minstrel_ht: rename prob_ewma to prob_avg, use it for the new average
Felix Fietkau [Tue, 8 Oct 2019 17:11:39 +0000 (19:11 +0200)]
mac80211: minstrel_ht: rename prob_ewma to prob_avg, use it for the new average

Reduces the per-rate data structure size.

Signed-off-by: Felix Fietkau <nbd@nbd.name>
Link: https://lore.kernel.org/r/20191008171139.96476-3-nbd@nbd.name
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
5 years agomac80211: minstrel_ht: replace rate stats ewma with a better moving average
Felix Fietkau [Tue, 8 Oct 2019 17:11:38 +0000 (19:11 +0200)]
mac80211: minstrel_ht: replace rate stats ewma with a better moving average

Rate success probability usually fluctuates a lot under normal conditions.
With a simple EWMA, noise and fluctuation can be reduced by increasing the
window length, but that comes at the cost of introducing lag on sudden
changes.

This change replaces the EWMA implementation with a moving average that's
designed to significantly reduce lag while keeping a bigger window size
by being better at filtering out noise.

It is only slightly more expensive than the simple EWMA and still avoids
divisions in its calculation.

The algorithm is adapted from an implementation intended for a completely
different field (stock market trading), where the tradeoff of lag vs
noise filtering is equally important. It is based on the "smoothing filter"
from http://www.stockspotter.com/files/PredictiveIndicators.pdf.

I have adapted it to fixed-point math with some constants so that it uses
only addition, bit shifts and multiplication.

To better make use of the filtering and bigger window size, the update
interval time is cut in half.

For testing, the algorithm can be reverted to the older one via debugfs.
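
As an illustration of the fixed-point filtering idea, a two-pole
smoother of the kind referenced above can be written with only adds,
shifts and multiplies; the constants below are derived for a roughly
16-sample window and are purely illustrative, not the kernel's values:

#define AVG_SCALE	12	/* Q12 fixed point */
#define AVG_C1		478	/* ~0.117 * 2^12 */
#define AVG_C2		5970	/* ~1.458 * 2^12 */
#define AVG_C3		(-2352)	/* ~-0.574 * 2^12; C1 + C2 + C3 == 2^12 */

struct avg_state {
	int in_prev;	/* previous input sample */
	int out1;	/* output one step back */
	int out2;	/* output two steps back */
};

static int avg_update(struct avg_state *s, int in)
{
	int out;

	out = AVG_C1 * ((in + s->in_prev) >> 1) +
	      AVG_C2 * s->out1 +
	      AVG_C3 * s->out2;
	out >>= AVG_SCALE;

	/* a real implementation would also clamp 'out' to the valid
	 * probability range here */
	s->in_prev = in;
	s->out2 = s->out1;
	s->out1 = out;
	return out;
}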

Signed-off-by: Felix Fietkau <nbd@nbd.name>
Link: https://lore.kernel.org/r/20191008171139.96476-2-nbd@nbd.name
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
5 years agomac80211: minstrel: remove divisions in tx status path
Felix Fietkau [Tue, 8 Oct 2019 17:11:37 +0000 (19:11 +0200)]
mac80211: minstrel: remove divisions in tx status path

Use a slightly different threshold for downgrading spatial streams to
make it easier to calculate without divisions.
Slightly reduces CPU overhead.

Signed-off-by: Felix Fietkau <nbd@nbd.name>
Link: https://lore.kernel.org/r/20191008171139.96476-1-nbd@nbd.name
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
5 years agonl80211: trivial: Remove redundant loop
Denis Kenzior [Tue, 8 Oct 2019 16:43:50 +0000 (11:43 -0500)]
nl80211: trivial: Remove redundant loop

cfg80211_assign_cookie already checks & prevents a 0 from being
returned, so the explicit loop is unnecessary.

Signed-off-by: Denis Kenzior <denkenz@gmail.com>
Link: https://lore.kernel.org/r/20191008164350.2836-1-denkenz@gmail.com
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
5 years agoipvlan: consolidate TSO flags using NETIF_F_ALL_TSO
Mahesh Bandewar [Wed, 9 Oct 2019 23:20:11 +0000 (16:20 -0700)]
ipvlan: consolidate TSO flags using NETIF_F_ALL_TSO

This ensures that any newly added TSO-related flag (which would be
part of the ALL_TSO mask) is picked up automatically, so the IPvlan
driver doesn't need to be updated every time a new flag gets added.
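
Schematically, the driver can then advertise one mask instead of
enumerating each TSO flag by hand (the exact mask contents here are
illustrative, not the driver's real feature set):

#define IPVLAN_FEATURES \
	(NETIF_F_SG | NETIF_F_HW_CSUM | NETIF_F_ALL_TSO | NETIF_F_GRO)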

Signed-off-by: Mahesh Bandewar <maheshb@google.com>
Suggested-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
5 years agoMerge branch 'bpf-romap-known-scalars'
Daniel Borkmann [Thu, 10 Oct 2019 23:49:16 +0000 (01:49 +0200)]
Merge branch 'bpf-romap-known-scalars'

Andrii Nakryiko says:

====================
With BPF maps supporting direct map access (currently, array_map w/ single
element, used for global data) that are read-only both from system call and
BPF side, it's possible for BPF verifier to track its contents as known
constants.

Now it's possible for user-space control app to pre-initialize read-only map
(e.g., for .rodata section) with user-provided flags and parameters and rely
on BPF verifier to detect and eliminate dead code resulting from specific
combination of input parameters.

v1->v2:
- BPF_F_RDONLY means nothing, stick to just map->frozen (Daniel);
- stick to passing just offset into map_direct_value_addr (Martin).
====================

Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
5 years agoselftests/bpf: Add read-only map values propagation tests
Andrii Nakryiko [Wed, 9 Oct 2019 20:14:58 +0000 (13:14 -0700)]
selftests/bpf: Add read-only map values propagation tests

Add tests checking that the verifier does proper constant propagation
for read-only maps. If constant propagation didn't work, the skipp_loop
and part_loop BPF programs would be rejected, because the BPF verifier
would otherwise not be able to prove that they ever complete. With
constant propagation, though, they are successfully validated as
properly terminating loops.

Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20191009201458.2679171-3-andriin@fb.com
5 years agobpf: Track contents of read-only maps as scalars
Andrii Nakryiko [Wed, 9 Oct 2019 20:14:57 +0000 (13:14 -0700)]
bpf: Track contents of read-only maps as scalars

Maps that are read-only both from BPF program side and user space side
have their contents constant, so verifier can track referenced values
precisely and use that knowledge for dead code elimination, branch
pruning, etc. This patch teaches BPF verifier how to do this.
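
A rough sketch of what this enables on the program side (names are
illustrative; the .rodata map has to be frozen before load, which
libbpf is expected to do for read-only data):

#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>	/* header location assumed */

/* set by user space before load, read-only afterwards */
const volatile int max_scan = 16;

SEC("raw_tracepoint/sys_enter")
int scan(void *ctx)
{
	int i;

	/* with the backing map read-only and frozen, the verifier can
	 * treat max_scan as the constant 16, prove the loop bounded and
	 * prune the branch below as dead code */
	for (i = 0; i < max_scan; i++) {
		if (i > 64)
			return 1;
	}
	return 0;
}

char _license[] SEC("license") = "GPL";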

Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20191009201458.2679171-2-andriin@fb.com
5 years agotc-testing: updated pedit test cases
Roman Mashak [Wed, 9 Oct 2019 20:53:51 +0000 (16:53 -0400)]
tc-testing: updated pedit test cases

Added test case for layered IP operation for a single source IP4/IP6
address and a single destination IP4/IP6 address.

Signed-off-by: Roman Mashak <mrv@mojatatu.com>
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
5 years agoptp: ptp_dte: use devm_platform_ioremap_resource() to simplify code
YueHaibing [Wed, 9 Oct 2019 15:03:25 +0000 (23:03 +0800)]
ptp: ptp_dte: use devm_platform_ioremap_resource() to simplify code

Use devm_platform_ioremap_resource() to simplify the code a bit.
This is detected by coccinelle.
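
The helper folds the usual two-step lookup-and-map into one call,
roughly (generic probe sketch, not the driver's exact code):

#include <linux/err.h>
#include <linux/platform_device.h>

static int foo_probe(struct platform_device *pdev)
{
	void __iomem *base;

	/* replaces:
	 *   res  = platform_get_resource(pdev, IORESOURCE_MEM, 0);
	 *   base = devm_ioremap_resource(&pdev->dev, res);
	 */
	base = devm_platform_ioremap_resource(pdev, 0);
	if (IS_ERR(base))
		return PTR_ERR(base);

	return 0;
}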

Signed-off-by: YueHaibing <yuehaibing@huawei.com>
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
5 years agoscripts/bpf: Fix xdp_md forward declaration typo
Andrii Nakryiko [Thu, 10 Oct 2019 04:25:34 +0000 (21:25 -0700)]
scripts/bpf: Fix xdp_md forward declaration typo

Fix a typo in struct xpd_md, generated from bpf_helpers_doc.py, which
is causing compilation warnings for programs using bpf_helpers.h.

Fixes: 7a387bed47f7 ("scripts/bpf: teach bpf_helpers_doc.py to dump BPF helper definitions")
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20191010042534.290562-1-andriin@fb.com
5 years agoteam: call RCU read lock when walking the port_list
Hangbin Liu [Wed, 9 Oct 2019 12:18:28 +0000 (20:18 +0800)]
team: call RCU read lock when walking the port_list

Before reading the team port list, we need to acquire the RCU read lock.
Also change list_for_each_entry() to list_for_each_entry_rcu().

v2:
repost the patch to net-next and remove fixes flag as this is a cosmetic
change.
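
The resulting pattern is the standard RCU list walk, roughly (helper
name made up; not the exact hunk):

/* Sketch: walking team ports safely from a reader context. */
static void foo_walk_ports(struct team *team)
{
	struct team_port *port;

	rcu_read_lock();
	list_for_each_entry_rcu(port, &team->port_list, list) {
		/* read-only use of port->dev etc. goes here */
	}
	rcu_read_unlock();
}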

Suggested-by: Paolo Abeni <pabeni@redhat.com>
Signed-off-by: Hangbin Liu <liuhangbin@gmail.com>
Acked-by: Paolo Abeni <pabeni@redhat.com>
Acked-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
5 years agonet: stmmac: Remove break after a return
Tiezhu Yang [Wed, 9 Oct 2019 14:29:00 +0000 (22:29 +0800)]
net: stmmac: Remove break after a return

Since break is not useful after a return, remove it.

Signed-off-by: Tiezhu Yang <yangtiezhu@loongson.cn>
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
5 years agonet/ethernet: xgmac don't set .driver twice
Ben Dooks [Wed, 9 Oct 2019 13:26:27 +0000 (14:26 +0100)]
net/ethernet: xgmac don't set .driver twice

Cleanup the .driver setup to just do it once, to avoid
the following sparse warning:

drivers/net/ethernet/calxeda/xgmac.c:1914:10: warning: Initializer entry defined twice
drivers/net/ethernet/calxeda/xgmac.c:1920:10:   also defined here

Signed-off-by: Ben Dooks <ben.dooks@codethink.co.uk>
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
5 years agoMerge branch 'net-smc-improve-termination-handling'
Jakub Kicinski [Thu, 10 Oct 2019 02:51:59 +0000 (19:51 -0700)]
Merge branch 'net-smc-improve-termination-handling'

Karsten Graul says:

====================
net/smc: improve termination handling

First set of patches to improve termination handling.
====================

Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
5 years agonet/smc: improve close of terminated socket
Ursula Braun [Wed, 9 Oct 2019 08:07:47 +0000 (10:07 +0200)]
net/smc: improve close of terminated socket

Make sure a terminated SMC socket reaches the CLOSED state.
Even if sending of close flags fails, change the socket state to
the intended state to avoid dangling sockets not reaching the
CLOSED state.

Signed-off-by: Ursula Braun <ubraun@linux.ibm.com>
Signed-off-by: Karsten Graul <kgraul@linux.ibm.com>
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
5 years agonet/smc: no new connections on disappearing devices
Ursula Braun [Wed, 9 Oct 2019 08:07:46 +0000 (10:07 +0200)]
net/smc: no new connections on disappearing devices

Add a "going_away" indication to ISM devices and IB ports and
avoid creation of new connections on such disappearing devices.

And do not handle ISM events if ISM device is disappearing.

Signed-off-by: Ursula Braun <ubraun@linux.ibm.com>
Signed-off-by: Karsten Graul <kgraul@linux.ibm.com>
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
5 years agonet/smc: increase device refcount for added link group
Ursula Braun [Wed, 9 Oct 2019 08:07:45 +0000 (10:07 +0200)]
net/smc: increase device refcount for added link group

SMCD link groups belong to certain ISM-devices and SMCR link group
links belong to certain IB-devices. Increase the refcount for
these devices, as long as corresponding link groups exist.

Signed-off-by: Ursula Braun <ubraun@linux.ibm.com>
Signed-off-by: Karsten Graul <kgraul@linux.ibm.com>
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
5 years agonet/smc: separate locks for SMCD and SMCR link group lists
Ursula Braun [Wed, 9 Oct 2019 08:07:44 +0000 (10:07 +0200)]
net/smc: separate locks for SMCD and SMCR link group lists

This patch introduces separate locks for the split SMCD and SMCR
link group lists.

Signed-off-by: Ursula Braun <ubraun@linux.ibm.com>
Signed-off-by: Karsten Graul <kgraul@linux.ibm.com>
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
5 years agonet/smc: separate SMCD and SMCR link group lists
Ursula Braun [Wed, 9 Oct 2019 08:07:43 +0000 (10:07 +0200)]
net/smc: separate SMCD and SMCR link group lists

Currently SMCD and SMCR link groups are maintained in one list.
To facilitate abnormal termination handling they are split into
a separate list for SMCR link groups and separate lists for SMCD
link groups per SMCD device.

Signed-off-by: Ursula Braun <ubraun@linux.ibm.com>
Signed-off-by: Karsten Graul <kgraul@linux.ibm.com>
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
5 years agonet: stmmac: dwmac-mediatek: fix wrong delay value issue when resume back
Biao Huang [Wed, 9 Oct 2019 07:33:48 +0000 (15:33 +0800)]
net: stmmac: dwmac-mediatek: fix wrong delay value issue when resume back

The mac_delay value is divided by 550/170 in mt2712_delay_ps2stage(),
which is invoked at the beginning of mt2712_set_delay(), so the value
should be restored at the end of mt2712_set_delay().
Otherwise, mac_delay will be divided again when mt2712_set_delay() is
invoked on resume.
So add mt2712_delay_stage2ps() to mt2712_set_delay() to recover the
original mac_delay value.

Signed-off-by: Biao Huang <biao.huang@mediatek.com>
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>