From: Mark Rutland
Date: Tue, 21 Mar 2023 14:04:24 +0000 (+0100)
Subject: ftrace: selftest: remove broken trace_direct_tramp
X-Git-Tag: v6.6.7~2890^2~43
X-Git-Url: http://review.tizen.org/git/?a=commitdiff_plain;h=fee86a4ed536f4e521f3a4530242e152dd2a466b;p=platform%2Fkernel%2Flinux-starfive.git

ftrace: selftest: remove broken trace_direct_tramp

The ftrace selftest code has a trace_direct_tramp() function which it
uses as a direct call trampoline. This happens to work on x86, since the
direct call's return address is in the usual place, and can be returned
to via a RET, but in general the calling convention for direct calls is
different from regular function calls, and requires a trampoline written
in assembly.

On s390, regular function calls place the return address in %r14, and an
ftrace patch-site in an instrumented function places the trampoline's
return address (which is within the instrumented function) in %r0,
preserving the original %r14 value in-place. As a regular C function will
return to the address in %r14, using a C function as the trampoline
results in the trampoline returning to the caller of the instrumented
function, skipping the body of the instrumented function.

Note that the s390 issue is not detected by the ftrace selftest code, as
the instrumented function is trivial, and returning back into the caller
happens to be equivalent.

On arm64, regular function calls place the return address in x30, and an
ftrace patch-site in an instrumented function saves this into x9 and
places the trampoline's return address (within the instrumented function)
in x30. A regular C function will return to the address in x30, but will
not restore x9 into x30. Consequently, using a C function as the
trampoline results in returning to the trampoline's return address having
corrupted x30, such that when the instrumented function returns, it will
return back into itself.

To avoid future issues in this area, remove the trace_direct_tramp()
function, and require that each architecture with direct calls provides
a stub trampoline, named ftrace_stub_direct_tramp. This can be written
to handle the architecture's trampoline calling convention, and in
future could be used elsewhere (e.g. in the ftrace ops sample, to measure
the overhead of direct calls), so we may as well always build it in.
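For illustration only (not part of this patch): a minimal sketch of how a
direct-call user could attach the architecture-provided
ftrace_stub_direct_tramp to a traced function, mirroring the selftest
usage in the diff below. The my_direct_ops and my_traced_func names are
hypothetical; only the ftrace API calls and ftrace_stub_direct_tramp come
from this series.

	#include <linux/ftrace.h>

	/* Hypothetical instrumented function; any ftrace-patchable function works. */
	extern void my_traced_func(void);

	static struct ftrace_ops my_direct_ops;

	static int attach_stub_tramp(void)
	{
		int ret;

		/* Restrict the ops to the single function we want to redirect. */
		ret = ftrace_set_filter_ip(&my_direct_ops,
					   (unsigned long)my_traced_func, 0, 0);
		if (ret)
			return ret;

		/* Point the patch-site at the per-architecture stub trampoline. */
		return register_ftrace_direct(&my_direct_ops,
					      (unsigned long)ftrace_stub_direct_tramp);
	}

	static void detach_stub_tramp(void)
	{
		/* 'true' also frees the filter set up above. */
		unregister_ftrace_direct(&my_direct_ops,
					 (unsigned long)ftrace_stub_direct_tramp,
					 true);
	}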
Link: https://lkml.kernel.org/r/20230321140424.345218-8-revest@chromium.org
Signed-off-by: Mark Rutland
Cc: Li Huafei
Cc: Xu Kuohai
Signed-off-by: Florent Revest
Acked-by: Jiri Olsa
Signed-off-by: Steven Rostedt (Google)
---

diff --git a/arch/s390/kernel/mcount.S b/arch/s390/kernel/mcount.S
index 43ff910..6c10da4 100644
--- a/arch/s390/kernel/mcount.S
+++ b/arch/s390/kernel/mcount.S
@@ -32,6 +32,11 @@ ENTRY(ftrace_stub)
 	BR_EX	%r14
 ENDPROC(ftrace_stub)
 
+SYM_CODE_START(ftrace_stub_direct_tramp)
+	lgr	%r1, %r0
+	BR_EX	%r1
+SYM_CODE_END(ftrace_stub_direct_tramp)
+
 	.macro	ftrace_regs_entry, allregs=0
 	stg	%r14,(__SF_GPRS+8*8)(%r15)	# save traced function caller
 
diff --git a/arch/x86/kernel/ftrace_32.S b/arch/x86/kernel/ftrace_32.S
index a0ed0e4..0d9a145 100644
--- a/arch/x86/kernel/ftrace_32.S
+++ b/arch/x86/kernel/ftrace_32.S
@@ -163,6 +163,11 @@ SYM_INNER_LABEL(ftrace_regs_call, SYM_L_GLOBAL)
 	jmp	.Lftrace_ret
 SYM_CODE_END(ftrace_regs_caller)
 
+SYM_FUNC_START(ftrace_stub_direct_tramp)
+	CALL_DEPTH_ACCOUNT
+	RET
+SYM_FUNC_END(ftrace_stub_direct_tramp)
+
 #ifdef CONFIG_FUNCTION_GRAPH_TRACER
 SYM_CODE_START(ftrace_graph_caller)
 	pushl	%eax
diff --git a/arch/x86/kernel/ftrace_64.S b/arch/x86/kernel/ftrace_64.S
index fb4f1e0..970d844 100644
--- a/arch/x86/kernel/ftrace_64.S
+++ b/arch/x86/kernel/ftrace_64.S
@@ -309,6 +309,10 @@ SYM_INNER_LABEL(ftrace_regs_caller_end, SYM_L_GLOBAL)
 SYM_FUNC_END(ftrace_regs_caller)
 STACK_FRAME_NON_STANDARD_FP(ftrace_regs_caller)
 
+SYM_FUNC_START(ftrace_stub_direct_tramp)
+	CALL_DEPTH_ACCOUNT
+	RET
+SYM_FUNC_END(ftrace_stub_direct_tramp)
 
 #else /* ! CONFIG_DYNAMIC_FTRACE */
 
diff --git a/include/linux/ftrace.h b/include/linux/ftrace.h
index 31f1e1d..931f3d9 100644
--- a/include/linux/ftrace.h
+++ b/include/linux/ftrace.h
@@ -413,6 +413,8 @@ int unregister_ftrace_direct(struct ftrace_ops *ops, unsigned long addr,
 int modify_ftrace_direct(struct ftrace_ops *ops, unsigned long addr);
 int modify_ftrace_direct_nolock(struct ftrace_ops *ops, unsigned long addr);
 
+void ftrace_stub_direct_tramp(void);
+
 #else
 struct ftrace_ops;
 # define ftrace_direct_func_count 0
diff --git a/kernel/trace/trace_selftest.c b/kernel/trace/trace_selftest.c
index 84cd7ba..a931d9a 100644
--- a/kernel/trace/trace_selftest.c
+++ b/kernel/trace/trace_selftest.c
@@ -786,14 +786,6 @@ static struct fgraph_ops fgraph_ops __initdata = {
 #ifdef CONFIG_DYNAMIC_FTRACE_WITH_DIRECT_CALLS
 static struct ftrace_ops direct;
 
-#ifndef CALL_DEPTH_ACCOUNT
-#define CALL_DEPTH_ACCOUNT ""
-#endif
-
-noinline __noclone static void trace_direct_tramp(void)
-{
-	asm(CALL_DEPTH_ACCOUNT);
-}
 #endif
 
 /*
@@ -873,7 +865,7 @@ trace_selftest_startup_function_graph(struct tracer *trace,
 	 */
 	ftrace_set_filter_ip(&direct, (unsigned long)DYN_FTRACE_TEST_NAME, 0, 0);
 	ret = register_ftrace_direct(&direct,
-				     (unsigned long)trace_direct_tramp);
+				     (unsigned long)ftrace_stub_direct_tramp);
 	if (ret)
 		goto out;
 
@@ -894,7 +886,7 @@ trace_selftest_startup_function_graph(struct tracer *trace,
 	unregister_ftrace_graph(&fgraph_ops);
 
 	ret = unregister_ftrace_direct(&direct,
-				       (unsigned long) trace_direct_tramp,
+				       (unsigned long)ftrace_stub_direct_tramp,
 				       true);
 	if (ret)
 		goto out;