trace_seq_printf(..., "%s", ...) can be done with trace_seq_puts()
instead, avoiding the printf overhead. In the second instance, the
string being copied was itself just produced by an snprintf() into a
stack buffer, so we might as well do that printf directly. This
naturally leads to moving the declaration of the str buffer inside the
CONFIG_KALLSYMS guard, which in turn lets gcc inline the function for
!CONFIG_KALLSYMS (it has only a single caller, but the huge stack frame
seems to keep gcc from inlining it for CONFIG_KALLSYMS).
Link: http://lkml.kernel.org/r/20181029223542.26175-4-linux@rasmusvillemoes.dk
Signed-off-by: Rasmus Villemoes <linux@rasmusvillemoes.dk>
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
 static void
 seq_print_sym(struct trace_seq *s, unsigned long address, bool offset)
 {
-	char str[KSYM_SYMBOL_LEN];
 #ifdef CONFIG_KALLSYMS
+	char str[KSYM_SYMBOL_LEN];
 	const char *name;
 
 	if (offset)
 		sprint_symbol(str, address);
 	else
 		kallsyms_lookup(address, NULL, NULL, NULL, str);
 	name = kretprobed(str);
 
 	if (name && strlen(name)) {
-		trace_seq_printf(s, "%s", name);
+		trace_seq_puts(s, name);
 		return;
 	}
 #endif
-	snprintf(str, KSYM_SYMBOL_LEN, "0x%08lx", address);
-	trace_seq_printf(s, "%s", str);
+	trace_seq_printf(s, "0x%08lx", address);
 }
 
 #ifndef CONFIG_64BIT