x86/fpu: Rename math_state_restore() to fpu__restore()

Move to the new fpu__*() namespace.
Reviewed-by: Borislav Petkov <bp@alien8.de>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Fenghua Yu <fenghua.yu@intel.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Note, some FPU functions are already explicitly preempt-safe. For example,
kernel_fpu_begin() and kernel_fpu_end() will disable and enable preemption.
-However, math_state_restore must be called with preemption disabled.
+However, fpu__restore() must be called with preemption disabled.
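As a minimal sketch of the two patterns (illustrative only, process
context assumed):

	/* kernel_fpu_begin()/kernel_fpu_end() manage preemption themselves: */
	kernel_fpu_begin();
	/* ... use FPU/SIMD registers here ... */
	kernel_fpu_end();

	/* fpu__restore() does not, so the caller holds preemption off: */
	preempt_disable();
	fpu__restore();
	preempt_enable();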
RULE #3: Lock acquire and release must be performed by same task
extern void fpu__flush_thread(struct task_struct *tsk);
extern int dump_fpu(struct pt_regs *, struct user_i387_struct *);
-extern void math_state_restore(void);
+extern void fpu__restore(void);
extern bool irq_fpu_usable(void);
}
/*
- * 'math_state_restore()' saves the current math information in the
+ * 'fpu__restore()' saves the current math information in the
* old math state array, and gets the new ones from the current task
*
* Careful.. There are problems with IBM-designed IRQ13 behaviour.
 * Must be called with kernel preemption disabled (e.g. with local
 * interrupts disabled, as in the case of do_device_not_available()).
*/
-void math_state_restore(void)
+void fpu__restore(void)
{
struct task_struct *tsk = current;
}
kernel_fpu_enable();
}
-EXPORT_SYMBOL_GPL(math_state_restore);
+EXPORT_SYMBOL_GPL(fpu__restore);
void fpu__flush_thread(struct task_struct *tsk)
{
set_used_math();
if (use_eager_fpu()) {
preempt_disable();
- math_state_restore();
+ fpu__restore();
preempt_enable();
}
* Leave lazy mode, flushing any hypercalls made here.
* This must be done before restoring TLS segments so
* the GDT and LDT are properly updated, and must be
- * done before math_state_restore, so the TS bit is up
+ * done before fpu__restore(), so the TS bit is up
* to date.
*/
arch_end_context_switch(next_p);
* Leave lazy mode, flushing any hypercalls made here. This
* must be done after loading TLS entries in the GDT but before
 * loading segments that might reference them, and it must
- * be done before math_state_restore, so the TS bit is up to
+ * be done before fpu__restore(), so the TS bit is up to
* date.
*/
arch_end_context_switch(next_p);
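The same ordering constraint applies to both the 32-bit and 64-bit
comments quoted above; as a condensed sketch (simplified, and note that
the real switch path restores FPU state through the switch_fpu_*()
helpers rather than by calling fpu__restore() directly):

	load_TLS(&next_p->thread, cpu);	 /* fill TLS entries in the GDT */
	arch_end_context_switch(next_p); /* flush lazy hypercalls: GDT/LDT
					    committed, CR0.TS up to date */
	/* segment loads that reference TLS, and any FPU restore, follow */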
return;
}
#endif
- math_state_restore(); /* interrupts still off */
+ fpu__restore(); /* interrupts still off */
#ifdef CONFIG_X86_32
conditional_sti(regs);
#endif
/*
* Similarly, if we took a trap because the Guest used the FPU,
* we have to restore the FPU it expects to see.
- * math_state_restore() may sleep and we may even move off to
+ * fpu__restore() may sleep and we may even move off to
* a different CPU. So all the critical stuff should be done
* before this.
*/
else if (cpu->regs->trapnum == 7 && !user_has_fpu())
- math_state_restore();
+ fpu__restore();
}
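Restated as a sketch (condensed; only the ordering matters here):

	if (cpu->regs->trapnum == 7 && !user_has_fpu()) {
		/*
		 * All work that must stay on this CPU was done above:
		 * fpu__restore() may sleep, and the task may wake up
		 * on a different CPU.
		 */
		fpu__restore();
	}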
/*H:130