bpf: fix nested bpf tracepoints with per-cpu data
author Matt Mullins <mmullins@fb.com>
Tue, 11 Jun 2019 21:53:04 +0000 (14:53 -0700)
committer Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Wed, 3 Jul 2019 11:14:48 +0000 (13:14 +0200)
commit a7177b94aff4febe657fe31bb7e5ecdef72079f4
tree 06a65e780c52e1053c7cee746cbbdea0a9827148
parent 4992d4af588156a1b90853d6f61918d3b7ab5278
bpf: fix nested bpf tracepoints with per-cpu data

commit 9594dc3c7e71b9f52bee1d7852eb3d4e3aea9e99 upstream.

BPF_PROG_TYPE_RAW_TRACEPOINTs can be executed nested on the same CPU, as
they do not increment bpf_prog_active while executing.

This enables three levels of nesting, to support
  - a kprobe or raw tp or perf event,
  - another one of the above that irq context happens to call, and
  - another one in nmi context
(at most one of which may be a kprobe or perf event).

Fixes: 20b9d7ac4852 ("bpf: avoid excessive stack usage for perf_sample_data")
Signed-off-by: Matt Mullins <mmullins@fb.com>
Acked-by: Andrii Nakryiko <andriin@fb.com>
Acked-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
kernel/trace/bpf_trace.c