From: Linus Torvalds
Date: Tue, 2 Nov 2021 14:56:47 +0000 (-0700)
Subject: Merge tag 'x86_core_for_v5.16_rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
X-Git-Tag: v6.1-rc5~2767
X-Git-Url: http://review.tizen.org/git/?a=commitdiff_plain;h=cc0356d6a02e064387c16a83cb96fe43ef33181e;p=platform%2Fkernel%2Flinux-starfive.git

Merge tag 'x86_core_for_v5.16_rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip

Pull x86 core updates from Borislav Petkov:

 - Do not #GP on userspace use of CLI/STI but pretend it was a NOP to
   keep old userspace from breaking. Adjust the corresponding iopl
   selftest to that.

 - Improve stack overflow warnings to say which stack got overflowed,
   and raise the exception stack sizes to 2 pages since overflowing the
   single page of exception stack is very easy to do nowadays with all
   the tracing machinery enabled. With that, rip out AMD SEV's custom
   mapping of its #VC exception stacks too.

 - A bunch of changes in preparation for FGKASLR, like supporting more
   than 64K section headers in the relocs tool, correcting the ORC
   lookup table size to cover the whole kernel .text, and other
   adjustments.

* tag 'x86_core_for_v5.16_rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  selftests/x86/iopl: Adjust to the faked iopl CLI/STI usage
  vmlinux.lds.h: Have ORC lookup cover entire _etext - _stext
  x86/boot/compressed: Avoid duplicate malloc() implementations
  x86/boot: Allow a "silent" kaslr random byte fetch
  x86/tools/relocs: Support >64K section headers
  x86/sev: Make the #VC exception stacks part of the default stacks storage
  x86: Increase exception stack sizes
  x86/mm/64: Improve stack overflow warnings
  x86/iopl: Fake iopl(3) CLI/STI usage
---

cc0356d6a02e064387c16a83cb96fe43ef33181e
diff --cc arch/x86/include/asm/irq_stack.h
index 7dcc2b0,8d55bd1..ae9d40f
--- a/arch/x86/include/asm/irq_stack.h
+++ b/arch/x86/include/asm/irq_stack.h
@@@ -185,10 -201,6 +201,7 @@@
  		IRQ_CONSTRAINTS, regs, vector);		\
  }
  
 +#ifndef CONFIG_PREEMPT_RT
- #define ASM_CALL_SOFTIRQ		\
- 	"call %P[__func]		\n"
- 
  /*
   * Macro to invoke __do_softirq on the irq stack. This is only called from
   * task context when bottom halves are about to be reenabled and soft
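
To make the first item above concrete, here is a minimal, hypothetical
userspace sketch (not part of this merge; the real coverage is the
adjusted selftest under tools/testing/selftests/x86/). After iopl(3),
executing CLI/STI used to kill the task with SIGSEGV from the #GP; with
this change the kernel swallows the fault and treats both instructions
as NOPs, so a program like this runs to completion:

    /* Hypothetical illustration, not kernel or selftest code. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/io.h>

    int main(void)
    {
            if (iopl(3)) {                  /* needs CAP_SYS_RAWIO */
                    perror("iopl");
                    return EXIT_FAILURE;
            }

            asm volatile("cli");            /* #GP caught, faked as a NOP */
            asm volatile("sti");            /* likewise a NOP */

            puts("CLI/STI did not fault: the kernel faked them");
            return EXIT_SUCCESS;
    }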
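
As for the ">64K section headers" item, that refers to the standard ELF
extended-numbering convention the relocs tool now has to handle: when an
object has 65536 or more sections, e_shnum is 0 and the real count lives
in section header 0's sh_size; likewise e_shstrndx is SHN_XINDEX and the
real string-table index lives in section header 0's sh_link. A
hypothetical sketch of that convention (not the relocs tool's actual
code, and 64-bit ELF only for brevity):

    /* Hypothetical illustration of ELF extended section numbering. */
    #include <elf.h>
    #include <stdint.h>
    #include <stdio.h>

    static uint64_t num_sections(const Elf64_Ehdr *ehdr,
                                 const Elf64_Shdr *shdr0)
    {
            /* e_shnum == 0 signals extended section numbering */
            return ehdr->e_shnum ? ehdr->e_shnum : shdr0->sh_size;
    }

    static uint32_t shstr_index(const Elf64_Ehdr *ehdr,
                                const Elf64_Shdr *shdr0)
    {
            /* SHN_XINDEX signals that the real index is in sh_link */
            return ehdr->e_shstrndx == SHN_XINDEX ? shdr0->sh_link
                                                  : ehdr->e_shstrndx;
    }

    int main(int argc, char **argv)
    {
            FILE *f = fopen(argc > 1 ? argv[1] : "/proc/self/exe", "rb");
            Elf64_Ehdr ehdr;
            Elf64_Shdr shdr0;

            if (!f || fread(&ehdr, sizeof(ehdr), 1, f) != 1)
                    return 1;
            /* section header 0 carries the extended values */
            if (fseek(f, (long)ehdr.e_shoff, SEEK_SET) ||
                fread(&shdr0, sizeof(shdr0), 1, f) != 1)
                    return 1;
            printf("%llu sections, shstrtab index %u\n",
                   (unsigned long long)num_sections(&ehdr, &shdr0),
                   shstr_index(&ehdr, &shdr0));
            fclose(f);
            return 0;
    }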