KVM: MMU: Store nx bit for large page shadows

We need to distinguish between large-page shadows that have the nx bit set
and those that don't. The problem shows up when booting a newer SMP Linux
kernel: the trampoline page, which runs in real mode (and real mode uses the
same kind of shadow pages as large pages), shares a mapping with a kernel
data page that is mapped with nx set, causing kvm to spin on that page.

Signed-off-by: Avi Kivity <avi@qumranet.com>

2 files changed, +4 -2

drivers/kvm/kvm.h (+2 -2):

···
  * bits 4:7 - page table level for this shadow (1-4)
  * bits 8:9 - page table quadrant for 2-level guests
  * bit 16 - "metaphysical" - gfn is not a real page (huge page/real mode)
- * bits 17:18 - "access" - the user and writable bits of a huge page pde
+ * bits 17:19 - "access" - the user, writable, and nx bits of a huge page pde
  */
 union kvm_mmu_page_role {
 	unsigned word;
···
 		unsigned quadrant : 2;
 		unsigned pad_for_nice_hex_output : 6;
 		unsigned metaphysical : 1;
-		unsigned hugepage_access : 2;
+		unsigned hugepage_access : 3;
 	};
 };
drivers/kvm/paging_tmpl.h (+2):

···
 			metaphysical = 1;
 			hugepage_access = *guest_ent;
 			hugepage_access &= PT_USER_MASK | PT_WRITABLE_MASK;
+			if (*guest_ent & PT64_NX_MASK)
+				hugepage_access |= (1 << 2);
 			hugepage_access >>= PT_WRITABLE_SHIFT;
 			table_gfn = (*guest_ent & PT_BASE_ADDR_MASK)
 				>> PAGE_SHIFT;