tracing/perf: Avoid perf_trace_buf_*() in perf_trace_##call() when possible
author	Oleg Nesterov <oleg@redhat.com>
	Tue, 6 Aug 2013 16:08:47 +0000 (18:08 +0200)
committer	Steven Rostedt <rostedt@goodmis.org>
	Wed, 14 Aug 2013 01:06:30 +0000 (21:06 -0400)
commit	d027e6a9c83440bf1ca9e5503539d58d8e0914f1
tree	8f7397e15dd463c878939fa9d063adf167580573
parent	12473965c38a527a0c6f7a38d23edce60957f873
perf_trace_buf_prepare() + perf_trace_buf_submit(task => NULL)
make no sense if hlist_empty(head). Change perf_trace_##call()
to check ->perf_events beforehand and do nothing if it is empty.

This removes the overhead for tasks without events attached
to them. For example, "perf record -e sched:sched_switch -p1"
attaches the counter(s) to a single task, but every task in
the system would still do perf_trace_buf_prepare/submit() just
to realize that it was not attached to this event.

However, we can only do this optimization if __task == NULL,
so we also add the __builtin_constant_p(__task) check to ensure
the early return is compiled in only when the generated code
passes a constant NULL task.

With this patch, "perf bench sched pipe" shows approximately a
4% improvement when "perf record -p1" runs in parallel. Many
thanks to Steven for the testing.

Link: http://lkml.kernel.org/r/20130806160847.GA2746@redhat.com
Tested-by: David Ahern <dsahern@gmail.com>
Acked-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Oleg Nesterov <oleg@redhat.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
include/trace/ftrace.h