x86/mm/64: Remove the last VM_BUG_ON() from the TLB code

Let's avoid hard-to-diagnose crashes in the future.

Signed-off-by: Andy Lutomirski <luto@kernel.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/f423bbc97864089fbdeb813f1ea126c6eaed844a.1508000261.git.luto@kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>

Authored by Andy Lutomirski and committed by Ingo Molnar (commits e8b9b0cc, 723f2828)

+2 -2
arch/x86/mm/tlb.c
147 147 	this_cpu_write(cpu_tlbstate.is_lazy, false);
148 148 
149 149 	if (real_prev == next) {
150     -		VM_BUG_ON(this_cpu_read(cpu_tlbstate.ctxs[prev_asid].ctx_id) !=
151     -			  next->context.ctx_id);
    150 +		VM_WARN_ON(this_cpu_read(cpu_tlbstate.ctxs[prev_asid].ctx_id) !=
    151 +			   next->context.ctx_id);
152 152 
153 153 	/*
154 154 	 * We don't currently support having a real mm loaded without