From: Linus Torvalds
Date: Wed, 15 May 2019 23:05:47 +0000 (-0700)
Subject: Merge tag 'trace-v5.2' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux...
X-Git-Tag: v5.4-rc1~978
X-Git-Url: http://review.tizen.org/git/?a=commitdiff_plain;h=d2d8b146043ae7e250aef1fb312971f6f479d487;p=platform%2Fkernel%2Flinux-rpi.git

Merge tag 'trace-v5.2' of git://git./linux/kernel/git/rostedt/linux-trace

Pull tracing updates from Steven Rostedt:
 "The major changes in this tracing update includes:

   - Removal of non-DYNAMIC_FTRACE from 32bit x86

   - Removal of mcount support from x86

   - Emulating a call from int3 on x86_64, fixes live kernel patching

   - Consolidated Tracing Error logs file

  Minor updates:

   - Removal of klp_check_compiler_support()

   - kdb ftrace dumping output changes

   - Accessing and creating ftrace instances from inside the kernel

   - Clean up of #define if macro

   - Introduction of TRACE_EVENT_NOP() to disable trace events based on
     config options

  And other minor fixes and clean ups"

* tag 'trace-v5.2' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace: (44 commits)
  x86: Hide the int3_emulate_call/jmp functions from UML
  livepatch: Remove klp_check_compiler_support()
  ftrace/x86: Remove mcount support
  ftrace/x86_32: Remove support for non DYNAMIC_FTRACE
  tracing: Simplify "if" macro code
  tracing: Fix documentation about disabling options using trace_options
  tracing: Replace kzalloc with kcalloc
  tracing: Fix partial reading of trace event's id file
  tracing: Allow RCU to run between postponed startup tests
  tracing: Fix white space issues in parse_pred() function
  tracing: Eliminate const char[] auto variables
  ring-buffer: Fix mispelling of Calculate
  tracing: probeevent: Fix to make the type of $comm string
  tracing: probeevent: Do not accumulate on ret variable
  tracing: uprobes: Re-enable $comm support for uprobe events
  ftrace/x86_64: Emulate call function while updating in breakpoint handler
  x86_64: Allow breakpoints to emulate call instructions
  x86_64: Add gap to int3 to allow for call emulation
  tracing: kdb: Allow ftdump to skip all but the last few entries
  tracing: Add trace_total_entries() / trace_total_entries_cpu()
  ...
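To make the headline item concrete: live patching and ftrace rewrite call
sites while other CPUs may be executing them, so the call instruction is
swapped out through a breakpoint, and any CPU that traps on the int3 while
the site is half-patched must still behave as if a call happened.  The
following is only a minimal sketch of the breakpoint-patching sequence;
patch_insn_sketch() and sync_core_on_all_cpus() are illustrative placeholder
names, not interfaces added by this merge (the real sequence is
text_poke_bp() in arch/x86/kernel/alternative.c, which also records what
poke_int3_handler() should emulate while the site is in flux):

#include <linux/types.h>
#include <asm/text-patching.h>

static void sync_core_on_all_cpus(void);	/* placeholder, see note above */

/*
 * Sketch of breakpoint-based instruction patching, assuming text_poke()
 * as declared in arch/x86/include/asm/text-patching.h.
 */
static void patch_insn_sketch(void *addr, const void *new_insn, size_t len)
{
	unsigned char int3 = 0xcc;

	text_poke(addr, &int3, 1);		/* 1) arm the breakpoint       */
	sync_core_on_all_cpus();

	if (len > 1) {				/* 2) write all but byte 0     */
		text_poke((char *)addr + 1, (const char *)new_insn + 1, len - 1);
		sync_core_on_all_cpus();
	}

	text_poke(addr, new_insn, 1);		/* 3) replace int3 with byte 0 */
	sync_core_on_all_cpus();
}

While step 2 is in flight, a CPU that executes the patched address traps
into poke_int3_handler(); the new call-emulation helpers in the
text-patching.h hunk below are what make that trap able to "perform" the
call being installed.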
---

d2d8b146043ae7e250aef1fb312971f6f479d487
diff --cc arch/x86/Kconfig
index 326b2d5,0544041..21e9f2f
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@@ -29,8 -28,20 +29,19 @@@ config X86_6
  	select MODULES_USE_ELF_RELA
  	select NEED_DMA_MAP_STATE
  	select SWIOTLB
 -	select X86_DEV_DMA_OPS
  	select ARCH_HAS_SYSCALL_WRAPPER
  
+ config FORCE_DYNAMIC_FTRACE
+ 	def_bool y
+ 	depends on X86_32
+ 	depends on FUNCTION_TRACER
+ 	select DYNAMIC_FTRACE
+ 	help
+ 	 We keep the static function tracing (!DYNAMIC_FTRACE) around
+ 	 in order to test the non static function tracing in the
+ 	 generic code, as other architectures still use it. But we
+ 	 only need to keep it around for x86_64. No need to keep it
+ 	 for x86_32. For x86_32, force DYNAMIC_FTRACE.
+ 
  #
  # Arch settings
  #
diff --cc arch/x86/entry/entry_64.S
index 20e45d9,27fcc6f..11aa3b2
--- a/arch/x86/entry/entry_64.S
+++ b/arch/x86/entry/entry_64.S
@@@ -878,7 -879,7 +878,7 @@@ apicinterrupt IRQ_WORK_VECTOR irq_wor
   * @paranoid == 2 is special: the stub will never switch stacks.  This is for
   * #DF: if the thread stack is somehow unusable, we'll still get a useful OOPS.
   */
- .macro idtentry sym do_sym has_error_code:req paranoid=0 shift_ist=-1 ist_offset=0
 -.macro idtentry sym do_sym has_error_code:req paranoid=0 shift_ist=-1 create_gap=0
++.macro idtentry sym do_sym has_error_code:req paranoid=0 shift_ist=-1 ist_offset=0 create_gap=0
  ENTRY(\sym)
  	UNWIND_HINT_IRET_REGS offset=\has_error_code*8
@@@ -1128,8 -1143,8 +1142,8 @@@ apicinterrupt3 HYPERV_STIMER0_VECTOR
  	hv_stimer0_callback_vector hv_stimer0_vector_handler
  #endif /* CONFIG_HYPERV */
  
 -idtentry debug		do_debug		has_error_code=0	paranoid=1 shift_ist=DEBUG_STACK
 +idtentry debug		do_debug		has_error_code=0	paranoid=1 shift_ist=IST_INDEX_DB ist_offset=DB_STACK_OFFSET
- idtentry int3		do_int3			has_error_code=0
+ idtentry int3		do_int3			has_error_code=0	create_gap=1
  idtentry stack_segment	do_stack_segment	has_error_code=1
  
  #ifdef CONFIG_XEN_PV
diff --cc arch/x86/include/asm/text-patching.h
index c90678f,0bbb07e..880b551
--- a/arch/x86/include/asm/text-patching.h
+++ b/arch/x86/include/asm/text-patching.h
@@@ -35,11 -35,38 +35,41 @@@ extern void text_poke_early(void *addr
   * inconsistent instruction while you patch.
   */
  extern void *text_poke(void *addr, const void *opcode, size_t len);
 +extern void *text_poke_kgdb(void *addr, const void *opcode, size_t len);
  extern int poke_int3_handler(struct pt_regs *regs);
 -extern void *text_poke_bp(void *addr, const void *opcode, size_t len, void *handler);
 +extern void text_poke_bp(void *addr, const void *opcode, size_t len, void *handler);
  extern int after_bootmem;
 +extern __ro_after_init struct mm_struct *poking_mm;
 +extern __ro_after_init unsigned long poking_addr;
  
+ #ifndef CONFIG_UML_X86
+ static inline void int3_emulate_jmp(struct pt_regs *regs, unsigned long ip)
+ {
+ 	regs->ip = ip;
+ }
+ 
+ #define INT3_INSN_SIZE 1
+ #define CALL_INSN_SIZE 5
+ 
+ #ifdef CONFIG_X86_64
+ static inline void int3_emulate_push(struct pt_regs *regs, unsigned long val)
+ {
+ 	/*
+ 	 * The int3 handler in entry_64.S adds a gap between the
+ 	 * stack where the break point happened, and the saving of
+ 	 * pt_regs. We can extend the original stack because of
+ 	 * this gap. See the idtentry macro's create_gap option.
+ 	 */
+ 	regs->sp -= sizeof(unsigned long);
+ 	*(unsigned long *)regs->sp = val;
+ }
+ 
+ static inline void int3_emulate_call(struct pt_regs *regs, unsigned long func)
+ {
+ 	int3_emulate_push(regs, regs->ip - INT3_INSN_SIZE + CALL_INSN_SIZE);
+ 	int3_emulate_jmp(regs, func);
+ }
+ #endif /* CONFIG_X86_64 */
+ #endif /* !CONFIG_UML_X86 */
+ 
  #endif /* _ASM_X86_TEXT_PATCHING_H */
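The int3_emulate_*() helpers added above are what let a breakpoint handler
make a half-patched call site behave as if the call had already been
rewritten: the idtentry create_gap option reserves room below pt_regs, so
the handler can push a return address and redirect regs->ip.  The function
below is an illustration of that usage only; target_for() is a made-up
lookup standing in for ftrace/livepatch bookkeeping, and the real consumer
in this series is ftrace's int3 handler (see "ftrace/x86_64: Emulate call
function while updating in breakpoint handler" in the shortlog):

#include <linux/ptrace.h>
#include <asm/text-patching.h>

/* Hypothetical lookup: which function should this patched call site reach? */
static unsigned long target_for(unsigned long ip);

/* Returns 1 if the trap belonged to us and was handled, 0 otherwise. */
static int example_int3_handler(struct pt_regs *regs)
{
	unsigned long ip = regs->ip - INT3_INSN_SIZE;	/* start of patched insn */
	unsigned long func = target_for(ip);

	if (!func) {
		/* No target: skip over the 5-byte call site entirely. */
		int3_emulate_jmp(regs, ip + CALL_INSN_SIZE);
		return 1;
	}

	/*
	 * int3_emulate_call() pushes the return address (ip + CALL_INSN_SIZE)
	 * into the gap reserved below pt_regs and points regs->ip at func,
	 * so returning from the trap effectively performs the call.
	 */
	int3_emulate_call(regs, func);
	return 1;
}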