Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

x86/fpu: Rename math_state_restore() to fpu__restore()

Move to the new fpu__*() namespace.

Reviewed-by: Borislav Petkov <bp@alien8.de>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Fenghua Yu <fenghua.yu@intel.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@kernel.org>

+11 -11
+1 -1
Documentation/preempt-locking.txt
@@ -48,7 +48,7 @@
 
 Note, some FPU functions are already explicitly preempt safe. For example,
 kernel_fpu_begin and kernel_fpu_end will disable and enable preemption.
-However, math_state_restore must be called with preemption disabled.
+However, fpu__restore() must be called with preemption disabled.
 
 
 RULE #3: Lock acquire and release must be performed by same task
+1 -1
arch/x86/include/asm/i387.h
@@ -23,7 +23,7 @@
 extern void fpu__flush_thread(struct task_struct *tsk);
 
 extern int dump_fpu(struct pt_regs *, struct user_i387_struct *);
-extern void math_state_restore(void);
+extern void fpu__restore(void);
 
 extern bool irq_fpu_usable(void);
 
+3 -3
arch/x86/kernel/fpu/core.c
@@ -228,7 +228,7 @@
 }
 
 /*
- * 'math_state_restore()' saves the current math information in the
+ * 'fpu__restore()' saves the current math information in the
  * old math state array, and gets the new ones from the current task
  *
  * Careful.. There are problems with IBM-designed IRQ13 behaviour.
@@ -237,7 +237,7 @@
  * Must be called with kernel preemption disabled (eg with local
  * local interrupts as in the case of do_device_not_available).
  */
-void math_state_restore(void)
+void fpu__restore(void)
 {
 	struct task_struct *tsk = current;
 
@@ -267,7 +267,7 @@
 	}
 	kernel_fpu_enable();
 }
-EXPORT_SYMBOL_GPL(math_state_restore);
+EXPORT_SYMBOL_GPL(fpu__restore);
 
 void fpu__flush_thread(struct task_struct *tsk)
 {
+1 -1
arch/x86/kernel/fpu/xsave.c
@@ -404,7 +404,7 @@
 	set_used_math();
 	if (use_eager_fpu()) {
 		preempt_disable();
-		math_state_restore();
+		fpu__restore();
 		preempt_enable();
 	}
 
+1 -1
arch/x86/kernel/process_32.c
@@ -295,7 +295,7 @@
 	 * Leave lazy mode, flushing any hypercalls made here.
 	 * This must be done before restoring TLS segments so
 	 * the GDT and LDT are properly updated, and must be
-	 * done before math_state_restore, so the TS bit is up
+	 * done before fpu__restore(), so the TS bit is up
 	 * to date.
 	 */
 	arch_end_context_switch(next_p);
+1 -1
arch/x86/kernel/process_64.c
@@ -298,7 +298,7 @@
 	 * Leave lazy mode, flushing any hypercalls made here. This
 	 * must be done after loading TLS entries in the GDT but before
 	 * loading segments that might reference them, and and it must
-	 * be done before math_state_restore, so the TS bit is up to
+	 * be done before fpu__restore(), so the TS bit is up to
	 * date.
	 */
	arch_end_context_switch(next_p);
+1 -1
arch/x86/kernel/traps.c
@@ -846,7 +846,7 @@
 		return;
 	}
 #endif
-	math_state_restore(); /* interrupts still off */
+	fpu__restore(); /* interrupts still off */
 #ifdef CONFIG_X86_32
 	conditional_sti(regs);
 #endif
+2 -2
drivers/lguest/x86/core.c
@@ -297,12 +297,12 @@
 	/*
 	 * Similarly, if we took a trap because the Guest used the FPU,
 	 * we have to restore the FPU it expects to see.
-	 * math_state_restore() may sleep and we may even move off to
+	 * fpu__restore() may sleep and we may even move off to
 	 * a different CPU. So all the critical stuff should be done
 	 * before this.
 	 */
 	else if (cpu->regs->trapnum == 7 && !user_has_fpu())
-		math_state_restore();
+		fpu__restore();
 }
 
 /*H:130