x86/mm/64: Tighten up vmalloc_fault() sanity checks on 5-level kernels

On a 5-level kernel, if a non-init mm has a top-level entry, it needs to
match init_mm's, but the vmalloc_fault() code skipped over the BUG_ON()
that would have checked it.

While we're at it, get rid of the rather confusing 4-level folded "pgd"
logic.

Fixes: b50858ce3e2a ("x86/mm/vmalloc: Add 5-level paging support")
Signed-off-by: Andy Lutomirski <luto@kernel.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Konstantin Khlebnikov <khlebnikov@yandex-team.ru>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Neil Berrington <neil.berrington@datacore.com>
Link: https://lkml.kernel.org/r/2ae598f8c279b0a29baf75df207e6f2fdddc0a1b.1516914529.git.luto@kernel.org

Authored by Andy Lutomirski and committed by Thomas Gleixner (commit 36b3a772, parent 5beda7d5)

Changed files (+9 -13): arch/x86/mm/fault.c
--- a/arch/x86/mm/fault.c
+++ b/arch/x86/mm/fault.c
@@ -439,18 +439,13 @@
 	if (pgd_none(*pgd_ref))
 		return -1;
 
-	if (pgd_none(*pgd)) {
-		set_pgd(pgd, *pgd_ref);
-		arch_flush_lazy_mmu_mode();
-	} else if (CONFIG_PGTABLE_LEVELS > 4) {
-		/*
-		 * With folded p4d, pgd_none() is always false, so the pgd may
-		 * point to an empty page table entry and pgd_page_vaddr()
-		 * will return garbage.
-		 *
-		 * We will do the correct sanity check on the p4d level.
-		 */
-		BUG_ON(pgd_page_vaddr(*pgd) != pgd_page_vaddr(*pgd_ref));
+	if (CONFIG_PGTABLE_LEVELS > 4) {
+		if (pgd_none(*pgd)) {
+			set_pgd(pgd, *pgd_ref);
+			arch_flush_lazy_mmu_mode();
+		} else {
+			BUG_ON(pgd_page_vaddr(*pgd) != pgd_page_vaddr(*pgd_ref));
+		}
 	}
 
 	/* With 4-level paging, copying happens on the p4d level. */
@@ -459,7 +454,7 @@
 	if (p4d_none(*p4d_ref))
 		return -1;
 
-	if (p4d_none(*p4d)) {
+	if (p4d_none(*p4d) && CONFIG_PGTABLE_LEVELS == 4) {
 		set_p4d(p4d, *p4d_ref);
 		arch_flush_lazy_mmu_mode();
 	} else {
@@ -470,6 +465,7 @@
 	 * Below here mismatches are bugs because these lower tables
 	 * are shared:
 	 */
+	BUILD_BUG_ON(CONFIG_PGTABLE_LEVELS < 4);
 
 	pud = pud_offset(p4d, address);
 	pud_ref = pud_offset(p4d_ref, address);