x86/nmi/64: Fix a paravirt stack-clobbering bug in the NMI code

The NMI entry code that switches to the normal kernel stack needs to
be very careful not to clobber any extra stack slots on the NMI
stack. The code is fine under the assumption that SWAPGS is a single
normal instruction, but under paravirt SWAPGS can expand to a call,
which pushes a return address onto the stack. Use
SWAPGS_UNSAFE_STACK instead, which is intended for contexts where the
stack must not be written to.
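
A simplified sketch of why the two macros differ under CONFIG_PARAVIRT
(based on arch/x86/include/asm/paravirt.h of this era; the exact
PARA_SITE/PARA_PATCH/CLBR arguments are reconstructed from memory and
may differ slightly):

```c
/*
 * Plain SWAPGS is a paravirt patch site whose default body is an
 * indirect CALL into pv_cpu_ops.swapgs. A CALL pushes a return
 * address -- i.e. it writes to the current stack, which is exactly
 * what the NMI stack-switch code cannot tolerate.
 */
#define SWAPGS								\
	PARA_SITE(PARA_PATCH(pv_cpu_ops, PV_CPU_swapgs), CLBR_NONE,	\
		  call PARA_INDIRECT(pv_cpu_ops+PV_CPU_swapgs))

/*
 * SWAPGS_UNSAFE_STACK emits the bare swapgs instruction in the
 * default site instead, so nothing is pushed to the stack.
 * ("UNSAFE_STACK" means: usable when the stack is not safe to touch.)
 */
#define SWAPGS_UNSAFE_STACK						\
	PARA_SITE(PARA_PATCH(pv_cpu_ops, PV_CPU_swapgs), CLBR_NONE,	\
		  swapgs)
```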

This is part of a fix for some random crashes that Sasha saw.

Fixes: 9b6e6a8334d5 ("x86/nmi/64: Switch stacks on userspace NMI entry")
Reported-and-tested-by: Sasha Levin <sasha.levin@oracle.com>
Signed-off-by: Andy Lutomirski <luto@kernel.org>
Cc: stable@vger.kernel.org
Link: http://lkml.kernel.org/r/974bc40edffdb5c2950a5c4977f821a446b76178.1442791737.git.luto@kernel.org
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>

Authored by Andy Lutomirski, committed by Thomas Gleixner (83c133cf fc57a7c6)

Changed files: arch/x86/entry/entry_64.S (+4 -1)
--- a/arch/x86/entry/entry_64.S
+++ b/arch/x86/entry/entry_64.S
@@ -1190,9 +1190,12 @@
 	 * we don't want to enable interrupts, because then we'll end
 	 * up in an awkward situation in which IRQs are on but NMIs
 	 * are off.
+	 *
+	 * We also must not push anything to the stack before switching
+	 * stacks lest we corrupt the "NMI executing" variable.
 	 */
 
-	SWAPGS
+	SWAPGS_UNSAFE_STACK
 	cld
 	movq	%rsp, %rdx
 	movq	PER_CPU_VAR(cpu_current_top_of_stack), %rsp