x86/power/32: Move SYSENTER MSR restoration to fix_processor_context()
author	Andy Lutomirski <luto@kernel.org>
Thu, 14 Dec 2017 21:19:06 +0000 (13:19 -0800)
committer	Ingo Molnar <mingo@kernel.org>
Fri, 15 Dec 2017 11:18:29 +0000 (12:18 +0100)
x86_64 restores the system call MSRs in fix_processor_context(),
whereas x86_32 restored them alongside the segment registers.  The
64-bit placement makes more sense, so move the 32-bit code to match
the 64-bit code.

No change in runtime behavior is expected.
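
For background, the SYSENTER fast-path state lives in three MSRs that
must be rewritten on resume.  A minimal sketch of what that restoration
amounts to, assuming the <asm/msr.h> helpers; the real work is done by
enable_sep_cpu() in arch/x86/kernel/cpu/common.c, and the function
below is illustrative only, not the kernel's implementation:

    #include <asm/msr.h>	/* wrmsr(), MSR_IA32_SYSENTER_* */

    /* Hypothetical sketch, not the kernel's enable_sep_cpu(). */
    static void restore_sysenter_msrs(u32 cs, u32 esp, u32 eip)
    {
            wrmsr(MSR_IA32_SYSENTER_CS,  cs,  0);	/* kernel code segment  */
            wrmsr(MSR_IA32_SYSENTER_ESP, esp, 0);	/* kernel entry stack   */
            wrmsr(MSR_IA32_SYSENTER_EIP, eip, 0);	/* SYSENTER entry point */
    }

Moving the enable_sep_cpu() call into fix_processor_context() means
both bitnesses restore their system call MSRs at the same point on
resume.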

Tested-by: Jarkko Nikula <jarkko.nikula@linux.intel.com>
Signed-off-by: Andy Lutomirski <luto@kernel.org>
Acked-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Acked-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Borislav Petkov <bpetkov@suse.de>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Pavel Machek <pavel@ucw.cz>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Rafael J. Wysocki <rjw@rjwysocki.net>
Cc: Zhang Rui <rui.zhang@intel.com>
Link: http://lkml.kernel.org/r/65158f8d7ee64dd6bbc6c1c83b3b34aaa854e3ae.1513286253.git.luto@kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
arch/x86/power/cpu.c

index 472bc8c8212b57f15a9368f07b8ea55959f28d64..033c61e6891bcf385de6a75636165e32057b010d 100644
@@ -174,6 +174,9 @@ static void fix_processor_context(void)
        write_gdt_entry(desc, GDT_ENTRY_TSS, &tss, DESC_TSS);
 
        syscall_init();                         /* This sets MSR_*STAR and related */
+#else
+       if (boot_cpu_has(X86_FEATURE_SEP))
+               enable_sep_cpu();
 #endif
        load_TR_desc();                         /* This does ltr */
        load_mm_ldt(current->active_mm);        /* This does lldt */
@@ -237,12 +240,6 @@ static void notrace __restore_processor_state(struct saved_context *ctxt)
        loadsegment(fs, ctxt->fs);
        loadsegment(gs, ctxt->gs);
        loadsegment(ss, ctxt->ss);
-
-       /*
-        * sysenter MSRs
-        */
-       if (boot_cpu_has(X86_FEATURE_SEP))
-               enable_sep_cpu();
 #else
 /* CONFIG_X86_64 */
        asm volatile ("movw %0, %%ds" :: "r" (ctxt->ds));
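
For comparison, the 64-bit side already sits in fix_processor_context():
syscall_init() rewrites the SYSCALL MSRs, per the "This sets MSR_*STAR
and related" comment above.  A hedged sketch of what that covers
(illustrative only; see syscall_init() in arch/x86/kernel/cpu/common.c
for the real routine, and note the real MSR_SYSCALL_MASK value clears
more flag bits than shown here):

    #include <asm/msr.h>	/* wrmsr(), wrmsrl() */

    /* Hypothetical sketch, not the kernel's syscall_init(). */
    static void restore_syscall_msrs(void)
    {
            /* Segment selector bases used by SYSCALL/SYSRET */
            wrmsr(MSR_STAR, 0, (__USER32_CS << 16) | __KERNEL_CS);
            /* 64-bit SYSCALL entry point */
            wrmsrl(MSR_LSTAR, (unsigned long)entry_SYSCALL_64);
            /* RFLAGS bits cleared on kernel entry (real mask is larger) */
            wrmsrl(MSR_SYSCALL_MASK, X86_EFLAGS_IF);
    }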