Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm

Pull kvm fixes from Paolo Bonzini:
"ARM:

- Fix the handling of ZCR_EL2 in NV VMs

- Pick the correct translation regime when doing a PTW on the back of
a SEA

- Prevent userspace from injecting an event into a vcpu that isn't
initialised yet

- Move timer save/restore to the sysreg handling code, fixing EL2
timer access in the process

- Add FGT-based trapping of MDSCR_EL1 to reduce the overhead of debug

- Fix trapping configuration when the host isn't GICv3

- Improve the detection of HCR_EL2.E2H being RES1

- Drop a spurious 'break' statement in the S1 PTW

- Don't try to access SPE when owned by EL3

Documentation updates:

- Document the failure modes of event injection

- Document that a GICv3 guest can be created on a GICv5 host with
FEAT_GCIE_LEGACY

Selftest improvements:

- Add a selftest for the effective value of HCR_EL2.AMO

- Address a build warning in the timer selftest when building with
clang

- Teach irqfd selftests about non-x86 architectures

- Add missing sysregs to the set_id_regs selftest

- Fix vcpu allocation in the vgic_lpi_stress selftest

- Correctly enable interrupts in the vgic_lpi_stress selftest

x86:

- Expand the KVM_PRE_FAULT_MEMORY selftest to add a regression test
for the bug fixed by commit 3ccbf6f47098 ("KVM: x86/mmu: Return
-EAGAIN if userspace deletes/moves memslot during prefault")

- Don't try to get PMU capabilities from perf when running on a system
with hybrid CPUs/PMUs, as perf will rightly WARN.

guest_memfd:

- Rework KVM_CAP_GUEST_MEMFD_MMAP (newly introduced in 6.18) into a
more generic KVM_CAP_GUEST_MEMFD_FLAGS

- Add a guest_memfd INIT_SHARED flag and require userspace to
explicitly set said flag to initialize memory as SHARED,
irrespective of MMAP.

The behavior merged in 6.18 is that enabling mmap() implicitly
initializes memory as SHARED, which would result in an ABI
collision for x86 CoCo VMs as their memory is currently always
initialized PRIVATE.

- Allow mmap() on guest_memfd for x86 CoCo VMs, i.e. on VMs with
private memory, to enable testing such setups and hopefully flush out
any other lurking ABI issues before 6.18 is officially released.

- Add testcases to the guest_memfd selftest to cover guest_memfd
without MMAP, and host userspace accesses to mmap()'d private
memory"

* tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm: (46 commits)
arm64: Revamp HCR_EL2.E2H RES1 detection
KVM: arm64: nv: Use FGT write trap of MDSCR_EL1 when available
KVM: arm64: Compute per-vCPU FGTs at vcpu_load()
KVM: arm64: selftests: Fix misleading comment about virtual timer encoding
KVM: arm64: selftests: Add an E2H=0-specific configuration to get_reg_list
KVM: arm64: selftests: Make dependencies on VHE-specific registers explicit
KVM: arm64: Kill leftovers of ad-hoc timer userspace access
KVM: arm64: Fix WFxT handling of nested virt
KVM: arm64: Move CNT*CT_EL0 userspace accessors to generic infrastructure
KVM: arm64: Move CNT*_CVAL_EL0 userspace accessors to generic infrastructure
KVM: arm64: Move CNT*_CTL_EL0 userspace accessors to generic infrastructure
KVM: arm64: Add timer UAPI workaround to sysreg infrastructure
KVM: arm64: Make timer_set_offset() generally accessible
KVM: arm64: Replace timer context vcpu pointer with timer_id
KVM: arm64: Introduce timer_context_to_vcpu() helper
KVM: arm64: Hide CNTHV_*_EL2 from userspace for nVHE guests
Documentation: KVM: Update GICv3 docs for GICv5 hosts
KVM: arm64: gic-v3: Only set ICH_HCR traps for v2-on-v3 or v3 guests
KVM: arm64: selftests: Actually enable IRQs in vgic_lpi_stress
KVM: arm64: selftests: Allocate vcpus with correct size
...

+941 -540
+17 -3
Documentation/virt/kvm/api.rst
··· 1229 1229 KVM_SET_VCPU_EVENTS or otherwise) because such an exception is always delivered 1230 1230 directly to the virtual CPU). 1231 1231 1232 + Calling this ioctl on a vCPU that hasn't been initialized will return 1233 + -ENOEXEC. 1234 + 1232 1235 :: 1233 1236 1234 1237 struct kvm_vcpu_events { ··· 1312 1309 1313 1310 See KVM_GET_VCPU_EVENTS for the data structure. 1314 1311 1312 + Calling this ioctl on a vCPU that hasn't been initialized will return 1313 + -ENOEXEC. 1315 1314 1316 1315 4.33 KVM_GET_DEBUGREGS 1317 1316 ---------------------- ··· 6437 6432 guest_memfd range is not allowed (any number of memory regions can be bound to 6438 6433 a single guest_memfd file, but the bound ranges must not overlap). 6439 6434 6440 - When the capability KVM_CAP_GUEST_MEMFD_MMAP is supported, the 'flags' field 6441 - supports GUEST_MEMFD_FLAG_MMAP. Setting this flag on guest_memfd creation 6442 - enables mmap() and faulting of guest_memfd memory to host userspace. 6435 + The capability KVM_CAP_GUEST_MEMFD_FLAGS enumerates the `flags` that can be 6436 + specified via KVM_CREATE_GUEST_MEMFD. Currently defined flags: 6437 + 6438 + ============================ ================================================ 6439 + GUEST_MEMFD_FLAG_MMAP Enable using mmap() on the guest_memfd file 6440 + descriptor. 6441 + GUEST_MEMFD_FLAG_INIT_SHARED Make all memory in the file shared during 6442 + KVM_CREATE_GUEST_MEMFD (memory files created 6443 + without INIT_SHARED will be marked private). 6444 + Shared memory can be faulted into host userspace 6445 + page tables. Private memory cannot. 6446 + ============================ ================================================ 6443 6447 6444 6448 When the KVM MMU performs a PFN lookup to service a guest fault and the backing 6445 6449 guest_memfd has the GUEST_MEMFD_FLAG_MMAP set, then the fault will always be
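The api.rst hunk above documents the new flag enumeration from the userspace
side. As a rough illustration of the resulting flow, here is a minimal,
hypothetical userspace sketch (error handling mostly omitted, not taken from
the patch) that queries KVM_CAP_GUEST_MEMFD_FLAGS and creates an mmap()-able,
SHARED-initialized guest_memfd; the fallback #defines reuse the values from
the include/uapi/linux/kvm.h hunk further down, in case the installed headers
predate this series, and the struct kvm_create_guest_memfd definition is
assumed to already be present in <linux/kvm.h>.

/*
 * Sketch only: create a guest_memfd with the newly enumerated flags.
 */
#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>
#include <linux/kvm.h>

#ifndef KVM_CAP_GUEST_MEMFD_FLAGS
#define KVM_CAP_GUEST_MEMFD_FLAGS	244
#endif
#ifndef GUEST_MEMFD_FLAG_MMAP
#define GUEST_MEMFD_FLAG_MMAP		(1ULL << 0)
#endif
#ifndef GUEST_MEMFD_FLAG_INIT_SHARED
#define GUEST_MEMFD_FLAG_INIT_SHARED	(1ULL << 1)
#endif

int main(void)
{
	int kvm = open("/dev/kvm", O_RDWR);
	int vm = ioctl(kvm, KVM_CREATE_VM, 0);

	/* Returns the mask of flags KVM_CREATE_GUEST_MEMFD accepts for this VM. */
	long flags = ioctl(vm, KVM_CHECK_EXTENSION, KVM_CAP_GUEST_MEMFD_FLAGS);
	if (flags < 0)
		flags = 0;

	struct kvm_create_guest_memfd gmem = {
		.size = 0x400000,	/* 4 MiB, page aligned */
		.flags = flags & (GUEST_MEMFD_FLAG_MMAP | GUEST_MEMFD_FLAG_INIT_SHARED),
	};
	int fd = ioctl(vm, KVM_CREATE_GUEST_MEMFD, &gmem);

	if (fd < 0 || !(gmem.flags & GUEST_MEMFD_FLAG_INIT_SHARED)) {
		fprintf(stderr, "guest_memfd cannot be initialized as shared here\n");
		return 1;
	}

	/* Only memory initialized as SHARED may be faulted into userspace. */
	char *mem = mmap(NULL, gmem.size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
	if (mem != MAP_FAILED) {
		mem[0] = 0xaa;
		munmap(mem, gmem.size);
	}

	close(fd);
	close(vm);
	close(kvm);
	return 0;
}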
+2 -1
Documentation/virt/kvm/devices/arm-vgic-v3.rst
··· 13 13 to inject interrupts to the VGIC instead of directly to CPUs. It is not 14 14 possible to create both a GICv3 and GICv2 on the same VM. 15 15 16 - Creating a guest GICv3 device requires a host GICv3 as well. 16 + Creating a guest GICv3 device requires a host GICv3 host, or a GICv5 host with 17 + support for FEAT_GCIE_LEGACY. 17 18 18 19 19 20 Groups:
+32 -6
arch/arm64/include/asm/el2_setup.h
··· 24 24 * ID_AA64MMFR4_EL1.E2H0 < 0. On such CPUs HCR_EL2.E2H is RES1, but it 25 25 * can reset into an UNKNOWN state and might not read as 1 until it has 26 26 * been initialized explicitly. 27 - * 28 - * Fruity CPUs seem to have HCR_EL2.E2H set to RAO/WI, but 29 - * don't advertise it (they predate this relaxation). 30 - * 31 27 * Initalize HCR_EL2.E2H so that later code can rely upon HCR_EL2.E2H 32 28 * indicating whether the CPU is running in E2H mode. 33 29 */ 34 30 mrs_s x1, SYS_ID_AA64MMFR4_EL1 35 31 sbfx x1, x1, #ID_AA64MMFR4_EL1_E2H0_SHIFT, #ID_AA64MMFR4_EL1_E2H0_WIDTH 36 32 cmp x1, #0 37 - b.ge .LnVHE_\@ 33 + b.lt .LnE2H0_\@ 38 34 35 + /* 36 + * Unfortunately, HCR_EL2.E2H can be RES1 even if not advertised 37 + * as such via ID_AA64MMFR4_EL1.E2H0: 38 + * 39 + * - Fruity CPUs predate the !FEAT_E2H0 relaxation, and seem to 40 + * have HCR_EL2.E2H implemented as RAO/WI. 41 + * 42 + * - On CPUs that lack FEAT_FGT, a hypervisor can't trap guest 43 + * reads of ID_AA64MMFR4_EL1 to advertise !FEAT_E2H0. NV 44 + * guests on these hosts can write to HCR_EL2.E2H without 45 + * trapping to the hypervisor, but these writes have no 46 + * functional effect. 47 + * 48 + * Handle both cases by checking for an essential VHE property 49 + * (system register remapping) to decide whether we're 50 + * effectively VHE-only or not. 51 + */ 52 + msr_hcr_el2 x0 // Setup HCR_EL2 as nVHE 53 + isb 54 + mov x1, #1 // Write something to FAR_EL1 55 + msr far_el1, x1 56 + isb 57 + mov x1, #2 // Try to overwrite it via FAR_EL2 58 + msr far_el2, x1 59 + isb 60 + mrs x1, far_el1 // If we see the latest write in FAR_EL1, 61 + cmp x1, #2 // we can safely assume we are VHE only. 62 + b.ne .LnVHE_\@ // Otherwise, we know that nVHE works. 63 + 64 + .LnE2H0_\@: 39 65 orr x0, x0, #HCR_E2H 40 - .LnVHE_\@: 41 66 msr_hcr_el2 x0 42 67 isb 68 + .LnVHE_\@: 43 69 .endm 44 70 45 71 .macro __init_el2_sctlr
+50
arch/arm64/include/asm/kvm_host.h
··· 816 816 u64 hcrx_el2; 817 817 u64 mdcr_el2; 818 818 819 + struct { 820 + u64 r; 821 + u64 w; 822 + } fgt[__NR_FGT_GROUP_IDS__]; 823 + 819 824 /* Exception Information */ 820 825 struct kvm_vcpu_fault_info fault; 821 826 ··· 1605 1600 void compute_fgu(struct kvm *kvm, enum fgt_group_id fgt); 1606 1601 void get_reg_fixed_bits(struct kvm *kvm, enum vcpu_sysreg reg, u64 *res0, u64 *res1); 1607 1602 void check_feature_map(void); 1603 + void kvm_vcpu_load_fgt(struct kvm_vcpu *vcpu); 1608 1604 1605 + static __always_inline enum fgt_group_id __fgt_reg_to_group_id(enum vcpu_sysreg reg) 1606 + { 1607 + switch (reg) { 1608 + case HFGRTR_EL2: 1609 + case HFGWTR_EL2: 1610 + return HFGRTR_GROUP; 1611 + case HFGITR_EL2: 1612 + return HFGITR_GROUP; 1613 + case HDFGRTR_EL2: 1614 + case HDFGWTR_EL2: 1615 + return HDFGRTR_GROUP; 1616 + case HAFGRTR_EL2: 1617 + return HAFGRTR_GROUP; 1618 + case HFGRTR2_EL2: 1619 + case HFGWTR2_EL2: 1620 + return HFGRTR2_GROUP; 1621 + case HFGITR2_EL2: 1622 + return HFGITR2_GROUP; 1623 + case HDFGRTR2_EL2: 1624 + case HDFGWTR2_EL2: 1625 + return HDFGRTR2_GROUP; 1626 + default: 1627 + BUILD_BUG_ON(1); 1628 + } 1629 + } 1630 + 1631 + #define vcpu_fgt(vcpu, reg) \ 1632 + ({ \ 1633 + enum fgt_group_id id = __fgt_reg_to_group_id(reg); \ 1634 + u64 *p; \ 1635 + switch (reg) { \ 1636 + case HFGWTR_EL2: \ 1637 + case HDFGWTR_EL2: \ 1638 + case HFGWTR2_EL2: \ 1639 + case HDFGWTR2_EL2: \ 1640 + p = &(vcpu)->arch.fgt[id].w; \ 1641 + break; \ 1642 + default: \ 1643 + p = &(vcpu)->arch.fgt[id].r; \ 1644 + break; \ 1645 + } \ 1646 + \ 1647 + p; \ 1648 + }) 1609 1649 1610 1650 #endif /* __ARM64_KVM_HOST_H__ */
+14 -91
arch/arm64/kvm/arch_timer.c
··· 66 66 67 67 u32 timer_get_ctl(struct arch_timer_context *ctxt) 68 68 { 69 - struct kvm_vcpu *vcpu = ctxt->vcpu; 69 + struct kvm_vcpu *vcpu = timer_context_to_vcpu(ctxt); 70 70 71 71 switch(arch_timer_ctx_index(ctxt)) { 72 72 case TIMER_VTIMER: ··· 85 85 86 86 u64 timer_get_cval(struct arch_timer_context *ctxt) 87 87 { 88 - struct kvm_vcpu *vcpu = ctxt->vcpu; 88 + struct kvm_vcpu *vcpu = timer_context_to_vcpu(ctxt); 89 89 90 90 switch(arch_timer_ctx_index(ctxt)) { 91 91 case TIMER_VTIMER: ··· 104 104 105 105 static void timer_set_ctl(struct arch_timer_context *ctxt, u32 ctl) 106 106 { 107 - struct kvm_vcpu *vcpu = ctxt->vcpu; 107 + struct kvm_vcpu *vcpu = timer_context_to_vcpu(ctxt); 108 108 109 109 switch(arch_timer_ctx_index(ctxt)) { 110 110 case TIMER_VTIMER: ··· 126 126 127 127 static void timer_set_cval(struct arch_timer_context *ctxt, u64 cval) 128 128 { 129 - struct kvm_vcpu *vcpu = ctxt->vcpu; 129 + struct kvm_vcpu *vcpu = timer_context_to_vcpu(ctxt); 130 130 131 131 switch(arch_timer_ctx_index(ctxt)) { 132 132 case TIMER_VTIMER: ··· 144 144 default: 145 145 WARN_ON(1); 146 146 } 147 - } 148 - 149 - static void timer_set_offset(struct arch_timer_context *ctxt, u64 offset) 150 - { 151 - if (!ctxt->offset.vm_offset) { 152 - WARN(offset, "timer %ld\n", arch_timer_ctx_index(ctxt)); 153 - return; 154 - } 155 - 156 - WRITE_ONCE(*ctxt->offset.vm_offset, offset); 157 147 } 158 148 159 149 u64 kvm_phys_timer_read(void) ··· 333 343 u64 ns; 334 344 335 345 ctx = container_of(hrt, struct arch_timer_context, hrtimer); 336 - vcpu = ctx->vcpu; 346 + vcpu = timer_context_to_vcpu(ctx); 337 347 338 348 trace_kvm_timer_hrtimer_expire(ctx); 339 349 ··· 426 436 * 427 437 * But hey, it's fast, right? 428 438 */ 429 - if (is_hyp_ctxt(ctx->vcpu) && 430 - (ctx == vcpu_vtimer(ctx->vcpu) || ctx == vcpu_ptimer(ctx->vcpu))) { 439 + struct kvm_vcpu *vcpu = timer_context_to_vcpu(ctx); 440 + if (is_hyp_ctxt(vcpu) && 441 + (ctx == vcpu_vtimer(vcpu) || ctx == vcpu_ptimer(vcpu))) { 431 442 unsigned long val = timer_get_ctl(ctx); 432 443 __assign_bit(__ffs(ARCH_TIMER_CTRL_IT_STAT), &val, level); 433 444 timer_set_ctl(ctx, val); ··· 461 470 trace_kvm_timer_emulate(ctx, should_fire); 462 471 463 472 if (should_fire != ctx->irq.level) 464 - kvm_timer_update_irq(ctx->vcpu, should_fire, ctx); 473 + kvm_timer_update_irq(timer_context_to_vcpu(ctx), should_fire, ctx); 465 474 466 475 kvm_timer_update_status(ctx, should_fire); 467 476 ··· 489 498 490 499 static void timer_save_state(struct arch_timer_context *ctx) 491 500 { 492 - struct arch_timer_cpu *timer = vcpu_timer(ctx->vcpu); 501 + struct arch_timer_cpu *timer = vcpu_timer(timer_context_to_vcpu(ctx)); 493 502 enum kvm_arch_timers index = arch_timer_ctx_index(ctx); 494 503 unsigned long flags; 495 504 ··· 600 609 601 610 static void timer_restore_state(struct arch_timer_context *ctx) 602 611 { 603 - struct arch_timer_cpu *timer = vcpu_timer(ctx->vcpu); 612 + struct arch_timer_cpu *timer = vcpu_timer(timer_context_to_vcpu(ctx)); 604 613 enum kvm_arch_timers index = arch_timer_ctx_index(ctx); 605 614 unsigned long flags; 606 615 ··· 659 668 660 669 static void kvm_timer_vcpu_load_gic(struct arch_timer_context *ctx) 661 670 { 662 - struct kvm_vcpu *vcpu = ctx->vcpu; 671 + struct kvm_vcpu *vcpu = timer_context_to_vcpu(ctx); 663 672 bool phys_active = false; 664 673 665 674 /* ··· 668 677 * this point and the register restoration, we'll take the 669 678 * interrupt anyway. 
670 679 */ 671 - kvm_timer_update_irq(ctx->vcpu, kvm_timer_should_fire(ctx), ctx); 680 + kvm_timer_update_irq(vcpu, kvm_timer_should_fire(ctx), ctx); 672 681 673 682 if (irqchip_in_kernel(vcpu->kvm)) 674 683 phys_active = kvm_vgic_map_is_active(vcpu, timer_irq(ctx)); ··· 1054 1063 struct arch_timer_context *ctxt = vcpu_get_timer(vcpu, timerid); 1055 1064 struct kvm *kvm = vcpu->kvm; 1056 1065 1057 - ctxt->vcpu = vcpu; 1066 + ctxt->timer_id = timerid; 1058 1067 1059 1068 if (timerid == TIMER_VTIMER) 1060 1069 ctxt->offset.vm_offset = &kvm->arch.timer_data.voffset; ··· 1112 1121 disable_percpu_irq(host_ptimer_irq); 1113 1122 } 1114 1123 1115 - int kvm_arm_timer_set_reg(struct kvm_vcpu *vcpu, u64 regid, u64 value) 1116 - { 1117 - struct arch_timer_context *timer; 1118 - 1119 - switch (regid) { 1120 - case KVM_REG_ARM_TIMER_CTL: 1121 - timer = vcpu_vtimer(vcpu); 1122 - kvm_arm_timer_write(vcpu, timer, TIMER_REG_CTL, value); 1123 - break; 1124 - case KVM_REG_ARM_TIMER_CNT: 1125 - if (!test_bit(KVM_ARCH_FLAG_VM_COUNTER_OFFSET, 1126 - &vcpu->kvm->arch.flags)) { 1127 - timer = vcpu_vtimer(vcpu); 1128 - timer_set_offset(timer, kvm_phys_timer_read() - value); 1129 - } 1130 - break; 1131 - case KVM_REG_ARM_TIMER_CVAL: 1132 - timer = vcpu_vtimer(vcpu); 1133 - kvm_arm_timer_write(vcpu, timer, TIMER_REG_CVAL, value); 1134 - break; 1135 - case KVM_REG_ARM_PTIMER_CTL: 1136 - timer = vcpu_ptimer(vcpu); 1137 - kvm_arm_timer_write(vcpu, timer, TIMER_REG_CTL, value); 1138 - break; 1139 - case KVM_REG_ARM_PTIMER_CNT: 1140 - if (!test_bit(KVM_ARCH_FLAG_VM_COUNTER_OFFSET, 1141 - &vcpu->kvm->arch.flags)) { 1142 - timer = vcpu_ptimer(vcpu); 1143 - timer_set_offset(timer, kvm_phys_timer_read() - value); 1144 - } 1145 - break; 1146 - case KVM_REG_ARM_PTIMER_CVAL: 1147 - timer = vcpu_ptimer(vcpu); 1148 - kvm_arm_timer_write(vcpu, timer, TIMER_REG_CVAL, value); 1149 - break; 1150 - 1151 - default: 1152 - return -1; 1153 - } 1154 - 1155 - return 0; 1156 - } 1157 - 1158 1124 static u64 read_timer_ctl(struct arch_timer_context *timer) 1159 1125 { 1160 1126 /* ··· 1126 1178 ctl |= ARCH_TIMER_CTRL_IT_STAT; 1127 1179 1128 1180 return ctl; 1129 - } 1130 - 1131 - u64 kvm_arm_timer_get_reg(struct kvm_vcpu *vcpu, u64 regid) 1132 - { 1133 - switch (regid) { 1134 - case KVM_REG_ARM_TIMER_CTL: 1135 - return kvm_arm_timer_read(vcpu, 1136 - vcpu_vtimer(vcpu), TIMER_REG_CTL); 1137 - case KVM_REG_ARM_TIMER_CNT: 1138 - return kvm_arm_timer_read(vcpu, 1139 - vcpu_vtimer(vcpu), TIMER_REG_CNT); 1140 - case KVM_REG_ARM_TIMER_CVAL: 1141 - return kvm_arm_timer_read(vcpu, 1142 - vcpu_vtimer(vcpu), TIMER_REG_CVAL); 1143 - case KVM_REG_ARM_PTIMER_CTL: 1144 - return kvm_arm_timer_read(vcpu, 1145 - vcpu_ptimer(vcpu), TIMER_REG_CTL); 1146 - case KVM_REG_ARM_PTIMER_CNT: 1147 - return kvm_arm_timer_read(vcpu, 1148 - vcpu_ptimer(vcpu), TIMER_REG_CNT); 1149 - case KVM_REG_ARM_PTIMER_CVAL: 1150 - return kvm_arm_timer_read(vcpu, 1151 - vcpu_ptimer(vcpu), TIMER_REG_CVAL); 1152 - } 1153 - return (u64)-1; 1154 1181 } 1155 1182 1156 1183 static u64 kvm_arm_timer_read(struct kvm_vcpu *vcpu,
+7
arch/arm64/kvm/arm.c
··· 642 642 vcpu->arch.hcr_el2 |= HCR_TWI; 643 643 644 644 vcpu_set_pauth_traps(vcpu); 645 + kvm_vcpu_load_fgt(vcpu); 645 646 646 647 if (is_protected_kvm_enabled()) { 647 648 kvm_call_hyp_nvhe(__pkvm_vcpu_load, ··· 1795 1794 case KVM_GET_VCPU_EVENTS: { 1796 1795 struct kvm_vcpu_events events; 1797 1796 1797 + if (!kvm_vcpu_initialized(vcpu)) 1798 + return -ENOEXEC; 1799 + 1798 1800 if (kvm_arm_vcpu_get_events(vcpu, &events)) 1799 1801 return -EINVAL; 1800 1802 ··· 1808 1804 } 1809 1805 case KVM_SET_VCPU_EVENTS: { 1810 1806 struct kvm_vcpu_events events; 1807 + 1808 + if (!kvm_vcpu_initialized(vcpu)) 1809 + return -ENOEXEC; 1811 1810 1812 1811 if (copy_from_user(&events, argp, sizeof(events))) 1813 1812 return -EFAULT;
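Since the arm.c hunk above makes the events ioctls fail with -ENOEXEC before
KVM_ARM_VCPU_INIT has run, a VMM needs to treat that errno as "vCPU not
initialized yet" rather than as a hard failure. A small sketch of that
handling follows; the helper name and messages are invented for illustration
and are not part of the patch.

#include <errno.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

static int get_pending_events(int vcpu_fd, struct kvm_vcpu_events *events)
{
	if (ioctl(vcpu_fd, KVM_GET_VCPU_EVENTS, events) == 0)
		return 0;

	if (errno == ENOEXEC) {
		/* vCPU exists, but KVM_ARM_VCPU_INIT has not been issued yet. */
		fprintf(stderr, "vCPU not initialized; init it before saving event state\n");
		return -1;
	}

	perror("KVM_GET_VCPU_EVENTS");
	return -1;
}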
+5 -2
arch/arm64/kvm/at.c
··· 91 91 case OP_AT_S1E2W: 92 92 case OP_AT_S1E2A: 93 93 return vcpu_el2_e2h_is_set(vcpu) ? TR_EL20 : TR_EL2; 94 - break; 95 94 default: 96 95 return (vcpu_el2_e2h_is_set(vcpu) && 97 96 vcpu_el2_tge_is_set(vcpu)) ? TR_EL20 : TR_EL10; ··· 1601 1602 .fn = match_s1_desc, 1602 1603 .priv = &dm, 1603 1604 }, 1604 - .regime = TR_EL10, 1605 1605 .as_el0 = false, 1606 1606 .pan = false, 1607 1607 }; 1608 1608 struct s1_walk_result wr = {}; 1609 1609 int ret; 1610 + 1611 + if (is_hyp_ctxt(vcpu)) 1612 + wi.regime = vcpu_el2_e2h_is_set(vcpu) ? TR_EL20 : TR_EL2; 1613 + else 1614 + wi.regime = TR_EL10; 1610 1615 1611 1616 ret = setup_s1_walk(vcpu, &wi, &wr, va); 1612 1617 if (ret)
+90
arch/arm64/kvm/config.c
··· 5 5 */ 6 6 7 7 #include <linux/kvm_host.h> 8 + #include <asm/kvm_emulate.h> 9 + #include <asm/kvm_nested.h> 8 10 #include <asm/sysreg.h> 9 11 10 12 /* ··· 1429 1427 *res0 = *res1 = 0; 1430 1428 break; 1431 1429 } 1430 + } 1431 + 1432 + static __always_inline struct fgt_masks *__fgt_reg_to_masks(enum vcpu_sysreg reg) 1433 + { 1434 + switch (reg) { 1435 + case HFGRTR_EL2: 1436 + return &hfgrtr_masks; 1437 + case HFGWTR_EL2: 1438 + return &hfgwtr_masks; 1439 + case HFGITR_EL2: 1440 + return &hfgitr_masks; 1441 + case HDFGRTR_EL2: 1442 + return &hdfgrtr_masks; 1443 + case HDFGWTR_EL2: 1444 + return &hdfgwtr_masks; 1445 + case HAFGRTR_EL2: 1446 + return &hafgrtr_masks; 1447 + case HFGRTR2_EL2: 1448 + return &hfgrtr2_masks; 1449 + case HFGWTR2_EL2: 1450 + return &hfgwtr2_masks; 1451 + case HFGITR2_EL2: 1452 + return &hfgitr2_masks; 1453 + case HDFGRTR2_EL2: 1454 + return &hdfgrtr2_masks; 1455 + case HDFGWTR2_EL2: 1456 + return &hdfgwtr2_masks; 1457 + default: 1458 + BUILD_BUG_ON(1); 1459 + } 1460 + } 1461 + 1462 + static __always_inline void __compute_fgt(struct kvm_vcpu *vcpu, enum vcpu_sysreg reg) 1463 + { 1464 + u64 fgu = vcpu->kvm->arch.fgu[__fgt_reg_to_group_id(reg)]; 1465 + struct fgt_masks *m = __fgt_reg_to_masks(reg); 1466 + u64 clear = 0, set = 0, val = m->nmask; 1467 + 1468 + set |= fgu & m->mask; 1469 + clear |= fgu & m->nmask; 1470 + 1471 + if (is_nested_ctxt(vcpu)) { 1472 + u64 nested = __vcpu_sys_reg(vcpu, reg); 1473 + set |= nested & m->mask; 1474 + clear |= ~nested & m->nmask; 1475 + } 1476 + 1477 + val |= set; 1478 + val &= ~clear; 1479 + *vcpu_fgt(vcpu, reg) = val; 1480 + } 1481 + 1482 + static void __compute_hfgwtr(struct kvm_vcpu *vcpu) 1483 + { 1484 + __compute_fgt(vcpu, HFGWTR_EL2); 1485 + 1486 + if (cpus_have_final_cap(ARM64_WORKAROUND_AMPERE_AC03_CPU_38)) 1487 + *vcpu_fgt(vcpu, HFGWTR_EL2) |= HFGWTR_EL2_TCR_EL1; 1488 + } 1489 + 1490 + static void __compute_hdfgwtr(struct kvm_vcpu *vcpu) 1491 + { 1492 + __compute_fgt(vcpu, HDFGWTR_EL2); 1493 + 1494 + if (is_hyp_ctxt(vcpu)) 1495 + *vcpu_fgt(vcpu, HDFGWTR_EL2) |= HDFGWTR_EL2_MDSCR_EL1; 1496 + } 1497 + 1498 + void kvm_vcpu_load_fgt(struct kvm_vcpu *vcpu) 1499 + { 1500 + if (!cpus_have_final_cap(ARM64_HAS_FGT)) 1501 + return; 1502 + 1503 + __compute_fgt(vcpu, HFGRTR_EL2); 1504 + __compute_hfgwtr(vcpu); 1505 + __compute_fgt(vcpu, HFGITR_EL2); 1506 + __compute_fgt(vcpu, HDFGRTR_EL2); 1507 + __compute_hdfgwtr(vcpu); 1508 + __compute_fgt(vcpu, HAFGRTR_EL2); 1509 + 1510 + if (!cpus_have_final_cap(ARM64_HAS_FGT2)) 1511 + return; 1512 + 1513 + __compute_fgt(vcpu, HFGRTR2_EL2); 1514 + __compute_fgt(vcpu, HFGWTR2_EL2); 1515 + __compute_fgt(vcpu, HFGITR2_EL2); 1516 + __compute_fgt(vcpu, HDFGRTR2_EL2); 1517 + __compute_fgt(vcpu, HDFGWTR2_EL2); 1432 1518 }
+10 -5
arch/arm64/kvm/debug.c
··· 15 15 #include <asm/kvm_arm.h> 16 16 #include <asm/kvm_emulate.h> 17 17 18 + static int cpu_has_spe(u64 dfr0) 19 + { 20 + return cpuid_feature_extract_unsigned_field(dfr0, ID_AA64DFR0_EL1_PMSVer_SHIFT) && 21 + !(read_sysreg_s(SYS_PMBIDR_EL1) & PMBIDR_EL1_P); 22 + } 23 + 18 24 /** 19 25 * kvm_arm_setup_mdcr_el2 - configure vcpu mdcr_el2 value 20 26 * ··· 83 77 *host_data_ptr(debug_brps) = SYS_FIELD_GET(ID_AA64DFR0_EL1, BRPs, dfr0); 84 78 *host_data_ptr(debug_wrps) = SYS_FIELD_GET(ID_AA64DFR0_EL1, WRPs, dfr0); 85 79 80 + if (cpu_has_spe(dfr0)) 81 + host_data_set_flag(HAS_SPE); 82 + 86 83 if (has_vhe()) 87 84 return; 88 - 89 - if (cpuid_feature_extract_unsigned_field(dfr0, ID_AA64DFR0_EL1_PMSVer_SHIFT) && 90 - !(read_sysreg_s(SYS_PMBIDR_EL1) & PMBIDR_EL1_P)) 91 - host_data_set_flag(HAS_SPE); 92 85 93 86 /* Check if we have BRBE implemented and available at the host */ 94 87 if (cpuid_feature_extract_unsigned_field(dfr0, ID_AA64DFR0_EL1_BRBE_SHIFT)) ··· 107 102 void kvm_debug_init_vhe(void) 108 103 { 109 104 /* Clear PMSCR_EL1.E{0,1}SPE which reset to UNKNOWN values. */ 110 - if (SYS_FIELD_GET(ID_AA64DFR0_EL1, PMSVer, read_sysreg(id_aa64dfr0_el1))) 105 + if (host_data_test_flag(HAS_SPE)) 111 106 write_sysreg_el1(0, SYS_PMSCR); 112 107 } 113 108
-70
arch/arm64/kvm/guest.c
··· 591 591 return copy_core_reg_indices(vcpu, NULL); 592 592 } 593 593 594 - static const u64 timer_reg_list[] = { 595 - KVM_REG_ARM_TIMER_CTL, 596 - KVM_REG_ARM_TIMER_CNT, 597 - KVM_REG_ARM_TIMER_CVAL, 598 - KVM_REG_ARM_PTIMER_CTL, 599 - KVM_REG_ARM_PTIMER_CNT, 600 - KVM_REG_ARM_PTIMER_CVAL, 601 - }; 602 - 603 - #define NUM_TIMER_REGS ARRAY_SIZE(timer_reg_list) 604 - 605 - static bool is_timer_reg(u64 index) 606 - { 607 - switch (index) { 608 - case KVM_REG_ARM_TIMER_CTL: 609 - case KVM_REG_ARM_TIMER_CNT: 610 - case KVM_REG_ARM_TIMER_CVAL: 611 - case KVM_REG_ARM_PTIMER_CTL: 612 - case KVM_REG_ARM_PTIMER_CNT: 613 - case KVM_REG_ARM_PTIMER_CVAL: 614 - return true; 615 - } 616 - return false; 617 - } 618 - 619 - static int copy_timer_indices(struct kvm_vcpu *vcpu, u64 __user *uindices) 620 - { 621 - for (int i = 0; i < NUM_TIMER_REGS; i++) { 622 - if (put_user(timer_reg_list[i], uindices)) 623 - return -EFAULT; 624 - uindices++; 625 - } 626 - 627 - return 0; 628 - } 629 - 630 - static int set_timer_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg) 631 - { 632 - void __user *uaddr = (void __user *)(long)reg->addr; 633 - u64 val; 634 - int ret; 635 - 636 - ret = copy_from_user(&val, uaddr, KVM_REG_SIZE(reg->id)); 637 - if (ret != 0) 638 - return -EFAULT; 639 - 640 - return kvm_arm_timer_set_reg(vcpu, reg->id, val); 641 - } 642 - 643 - static int get_timer_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg) 644 - { 645 - void __user *uaddr = (void __user *)(long)reg->addr; 646 - u64 val; 647 - 648 - val = kvm_arm_timer_get_reg(vcpu, reg->id); 649 - return copy_to_user(uaddr, &val, KVM_REG_SIZE(reg->id)) ? -EFAULT : 0; 650 - } 651 - 652 594 static unsigned long num_sve_regs(const struct kvm_vcpu *vcpu) 653 595 { 654 596 const unsigned int slices = vcpu_sve_slices(vcpu); ··· 666 724 res += num_sve_regs(vcpu); 667 725 res += kvm_arm_num_sys_reg_descs(vcpu); 668 726 res += kvm_arm_get_fw_num_regs(vcpu); 669 - res += NUM_TIMER_REGS; 670 727 671 728 return res; 672 729 } ··· 696 755 return ret; 697 756 uindices += kvm_arm_get_fw_num_regs(vcpu); 698 757 699 - ret = copy_timer_indices(vcpu, uindices); 700 - if (ret < 0) 701 - return ret; 702 - uindices += NUM_TIMER_REGS; 703 - 704 758 return kvm_arm_copy_sys_reg_indices(vcpu, uindices); 705 759 } 706 760 ··· 713 777 case KVM_REG_ARM64_SVE: return get_sve_reg(vcpu, reg); 714 778 } 715 779 716 - if (is_timer_reg(reg->id)) 717 - return get_timer_reg(vcpu, reg); 718 - 719 780 return kvm_arm_sys_reg_get_reg(vcpu, reg); 720 781 } 721 782 ··· 729 796 return kvm_arm_set_fw_reg(vcpu, reg); 730 797 case KVM_REG_ARM64_SVE: return set_sve_reg(vcpu, reg); 731 798 } 732 - 733 - if (is_timer_reg(reg->id)) 734 - return set_timer_reg(vcpu, reg); 735 799 736 800 return kvm_arm_sys_reg_set_reg(vcpu, reg); 737 801 }
+6 -1
arch/arm64/kvm/handle_exit.c
··· 147 147 if (esr & ESR_ELx_WFx_ISS_RV) { 148 148 u64 val, now; 149 149 150 - now = kvm_arm_timer_get_reg(vcpu, KVM_REG_ARM_TIMER_CNT); 150 + now = kvm_phys_timer_read(); 151 + if (is_hyp_ctxt(vcpu) && vcpu_el2_e2h_is_set(vcpu)) 152 + now -= timer_get_offset(vcpu_hvtimer(vcpu)); 153 + else 154 + now -= timer_get_offset(vcpu_vtimer(vcpu)); 155 + 151 156 val = vcpu_get_reg(vcpu, kvm_vcpu_sys_get_rt(vcpu)); 152 157 153 158 if (now >= val)
+17 -131
arch/arm64/kvm/hyp/include/hyp/switch.h
··· 195 195 __deactivate_cptr_traps_nvhe(vcpu); 196 196 } 197 197 198 - #define reg_to_fgt_masks(reg) \ 199 - ({ \ 200 - struct fgt_masks *m; \ 201 - switch(reg) { \ 202 - case HFGRTR_EL2: \ 203 - m = &hfgrtr_masks; \ 204 - break; \ 205 - case HFGWTR_EL2: \ 206 - m = &hfgwtr_masks; \ 207 - break; \ 208 - case HFGITR_EL2: \ 209 - m = &hfgitr_masks; \ 210 - break; \ 211 - case HDFGRTR_EL2: \ 212 - m = &hdfgrtr_masks; \ 213 - break; \ 214 - case HDFGWTR_EL2: \ 215 - m = &hdfgwtr_masks; \ 216 - break; \ 217 - case HAFGRTR_EL2: \ 218 - m = &hafgrtr_masks; \ 219 - break; \ 220 - case HFGRTR2_EL2: \ 221 - m = &hfgrtr2_masks; \ 222 - break; \ 223 - case HFGWTR2_EL2: \ 224 - m = &hfgwtr2_masks; \ 225 - break; \ 226 - case HFGITR2_EL2: \ 227 - m = &hfgitr2_masks; \ 228 - break; \ 229 - case HDFGRTR2_EL2: \ 230 - m = &hdfgrtr2_masks; \ 231 - break; \ 232 - case HDFGWTR2_EL2: \ 233 - m = &hdfgwtr2_masks; \ 234 - break; \ 235 - default: \ 236 - BUILD_BUG_ON(1); \ 237 - } \ 238 - \ 239 - m; \ 240 - }) 241 - 242 - #define compute_clr_set(vcpu, reg, clr, set) \ 243 - do { \ 244 - u64 hfg = __vcpu_sys_reg(vcpu, reg); \ 245 - struct fgt_masks *m = reg_to_fgt_masks(reg); \ 246 - set |= hfg & m->mask; \ 247 - clr |= ~hfg & m->nmask; \ 248 - } while(0) 249 - 250 - #define reg_to_fgt_group_id(reg) \ 251 - ({ \ 252 - enum fgt_group_id id; \ 253 - switch(reg) { \ 254 - case HFGRTR_EL2: \ 255 - case HFGWTR_EL2: \ 256 - id = HFGRTR_GROUP; \ 257 - break; \ 258 - case HFGITR_EL2: \ 259 - id = HFGITR_GROUP; \ 260 - break; \ 261 - case HDFGRTR_EL2: \ 262 - case HDFGWTR_EL2: \ 263 - id = HDFGRTR_GROUP; \ 264 - break; \ 265 - case HAFGRTR_EL2: \ 266 - id = HAFGRTR_GROUP; \ 267 - break; \ 268 - case HFGRTR2_EL2: \ 269 - case HFGWTR2_EL2: \ 270 - id = HFGRTR2_GROUP; \ 271 - break; \ 272 - case HFGITR2_EL2: \ 273 - id = HFGITR2_GROUP; \ 274 - break; \ 275 - case HDFGRTR2_EL2: \ 276 - case HDFGWTR2_EL2: \ 277 - id = HDFGRTR2_GROUP; \ 278 - break; \ 279 - default: \ 280 - BUILD_BUG_ON(1); \ 281 - } \ 282 - \ 283 - id; \ 284 - }) 285 - 286 - #define compute_undef_clr_set(vcpu, kvm, reg, clr, set) \ 287 - do { \ 288 - u64 hfg = kvm->arch.fgu[reg_to_fgt_group_id(reg)]; \ 289 - struct fgt_masks *m = reg_to_fgt_masks(reg); \ 290 - set |= hfg & m->mask; \ 291 - clr |= hfg & m->nmask; \ 292 - } while(0) 293 - 294 - #define update_fgt_traps_cs(hctxt, vcpu, kvm, reg, clr, set) \ 295 - do { \ 296 - struct fgt_masks *m = reg_to_fgt_masks(reg); \ 297 - u64 c = clr, s = set; \ 298 - u64 val; \ 299 - \ 300 - ctxt_sys_reg(hctxt, reg) = read_sysreg_s(SYS_ ## reg); \ 301 - if (is_nested_ctxt(vcpu)) \ 302 - compute_clr_set(vcpu, reg, c, s); \ 303 - \ 304 - compute_undef_clr_set(vcpu, kvm, reg, c, s); \ 305 - \ 306 - val = m->nmask; \ 307 - val |= s; \ 308 - val &= ~c; \ 309 - write_sysreg_s(val, SYS_ ## reg); \ 310 - } while(0) 311 - 312 - #define update_fgt_traps(hctxt, vcpu, kvm, reg) \ 313 - update_fgt_traps_cs(hctxt, vcpu, kvm, reg, 0, 0) 314 - 315 198 static inline bool cpu_has_amu(void) 316 199 { 317 200 u64 pfr0 = read_sysreg_s(SYS_ID_AA64PFR0_EL1); ··· 203 320 ID_AA64PFR0_EL1_AMU_SHIFT); 204 321 } 205 322 323 + #define __activate_fgt(hctxt, vcpu, reg) \ 324 + do { \ 325 + ctxt_sys_reg(hctxt, reg) = read_sysreg_s(SYS_ ## reg); \ 326 + write_sysreg_s(*vcpu_fgt(vcpu, reg), SYS_ ## reg); \ 327 + } while (0) 328 + 206 329 static inline void __activate_traps_hfgxtr(struct kvm_vcpu *vcpu) 207 330 { 208 331 struct kvm_cpu_context *hctxt = host_data_ptr(host_ctxt); 209 - struct kvm *kvm = kern_hyp_va(vcpu->kvm); 210 332 211 333 if 
(!cpus_have_final_cap(ARM64_HAS_FGT)) 212 334 return; 213 335 214 - update_fgt_traps(hctxt, vcpu, kvm, HFGRTR_EL2); 215 - update_fgt_traps_cs(hctxt, vcpu, kvm, HFGWTR_EL2, 0, 216 - cpus_have_final_cap(ARM64_WORKAROUND_AMPERE_AC03_CPU_38) ? 217 - HFGWTR_EL2_TCR_EL1_MASK : 0); 218 - update_fgt_traps(hctxt, vcpu, kvm, HFGITR_EL2); 219 - update_fgt_traps(hctxt, vcpu, kvm, HDFGRTR_EL2); 220 - update_fgt_traps(hctxt, vcpu, kvm, HDFGWTR_EL2); 336 + __activate_fgt(hctxt, vcpu, HFGRTR_EL2); 337 + __activate_fgt(hctxt, vcpu, HFGWTR_EL2); 338 + __activate_fgt(hctxt, vcpu, HFGITR_EL2); 339 + __activate_fgt(hctxt, vcpu, HDFGRTR_EL2); 340 + __activate_fgt(hctxt, vcpu, HDFGWTR_EL2); 221 341 222 342 if (cpu_has_amu()) 223 - update_fgt_traps(hctxt, vcpu, kvm, HAFGRTR_EL2); 343 + __activate_fgt(hctxt, vcpu, HAFGRTR_EL2); 224 344 225 345 if (!cpus_have_final_cap(ARM64_HAS_FGT2)) 226 346 return; 227 347 228 - update_fgt_traps(hctxt, vcpu, kvm, HFGRTR2_EL2); 229 - update_fgt_traps(hctxt, vcpu, kvm, HFGWTR2_EL2); 230 - update_fgt_traps(hctxt, vcpu, kvm, HFGITR2_EL2); 231 - update_fgt_traps(hctxt, vcpu, kvm, HDFGRTR2_EL2); 232 - update_fgt_traps(hctxt, vcpu, kvm, HDFGWTR2_EL2); 348 + __activate_fgt(hctxt, vcpu, HFGRTR2_EL2); 349 + __activate_fgt(hctxt, vcpu, HFGWTR2_EL2); 350 + __activate_fgt(hctxt, vcpu, HFGITR2_EL2); 351 + __activate_fgt(hctxt, vcpu, HDFGRTR2_EL2); 352 + __activate_fgt(hctxt, vcpu, HDFGWTR2_EL2); 233 353 } 234 354 235 355 #define __deactivate_fgt(htcxt, vcpu, reg) \
+1
arch/arm64/kvm/hyp/nvhe/pkvm.c
··· 172 172 173 173 /* Trust the host for non-protected vcpu features. */ 174 174 vcpu->arch.hcrx_el2 = host_vcpu->arch.hcrx_el2; 175 + memcpy(vcpu->arch.fgt, host_vcpu->arch.fgt, sizeof(vcpu->arch.fgt)); 175 176 return 0; 176 177 } 177 178
+6 -3
arch/arm64/kvm/nested.c
··· 1859 1859 { 1860 1860 u64 guest_mdcr = __vcpu_sys_reg(vcpu, MDCR_EL2); 1861 1861 1862 + if (is_nested_ctxt(vcpu)) 1863 + vcpu->arch.mdcr_el2 |= (guest_mdcr & NV_MDCR_GUEST_INCLUDE); 1862 1864 /* 1863 1865 * In yet another example where FEAT_NV2 is fscking broken, accesses 1864 1866 * to MDSCR_EL1 are redirected to the VNCR despite having an effect 1865 1867 * at EL2. Use a big hammer to apply sanity. 1868 + * 1869 + * Unless of course we have FEAT_FGT, in which case we can precisely 1870 + * trap MDSCR_EL1. 1866 1871 */ 1867 - if (is_hyp_ctxt(vcpu)) 1872 + else if (!cpus_have_final_cap(ARM64_HAS_FGT)) 1868 1873 vcpu->arch.mdcr_el2 |= MDCR_EL2_TDA; 1869 - else 1870 - vcpu->arch.mdcr_el2 |= (guest_mdcr & NV_MDCR_GUEST_INCLUDE); 1871 1874 }
+105 -26
arch/arm64/kvm/sys_regs.c
··· 203 203 MAPPED_EL2_SYSREG(AMAIR_EL2, AMAIR_EL1, NULL ); 204 204 MAPPED_EL2_SYSREG(ELR_EL2, ELR_EL1, NULL ); 205 205 MAPPED_EL2_SYSREG(SPSR_EL2, SPSR_EL1, NULL ); 206 - MAPPED_EL2_SYSREG(ZCR_EL2, ZCR_EL1, NULL ); 207 206 MAPPED_EL2_SYSREG(CONTEXTIDR_EL2, CONTEXTIDR_EL1, NULL ); 208 207 MAPPED_EL2_SYSREG(SCTLR2_EL2, SCTLR2_EL1, NULL ); 209 208 case CNTHCTL_EL2: ··· 1594 1595 return true; 1595 1596 } 1596 1597 1597 - static bool access_hv_timer(struct kvm_vcpu *vcpu, 1598 - struct sys_reg_params *p, 1599 - const struct sys_reg_desc *r) 1598 + static int arch_timer_set_user(struct kvm_vcpu *vcpu, 1599 + const struct sys_reg_desc *rd, 1600 + u64 val) 1600 1601 { 1601 - if (!vcpu_el2_e2h_is_set(vcpu)) 1602 - return undef_access(vcpu, p, r); 1602 + switch (reg_to_encoding(rd)) { 1603 + case SYS_CNTV_CTL_EL0: 1604 + case SYS_CNTP_CTL_EL0: 1605 + case SYS_CNTHV_CTL_EL2: 1606 + case SYS_CNTHP_CTL_EL2: 1607 + val &= ~ARCH_TIMER_CTRL_IT_STAT; 1608 + break; 1609 + case SYS_CNTVCT_EL0: 1610 + if (!test_bit(KVM_ARCH_FLAG_VM_COUNTER_OFFSET, &vcpu->kvm->arch.flags)) 1611 + timer_set_offset(vcpu_vtimer(vcpu), kvm_phys_timer_read() - val); 1612 + return 0; 1613 + case SYS_CNTPCT_EL0: 1614 + if (!test_bit(KVM_ARCH_FLAG_VM_COUNTER_OFFSET, &vcpu->kvm->arch.flags)) 1615 + timer_set_offset(vcpu_ptimer(vcpu), kvm_phys_timer_read() - val); 1616 + return 0; 1617 + } 1603 1618 1604 - return access_arch_timer(vcpu, p, r); 1619 + __vcpu_assign_sys_reg(vcpu, rd->reg, val); 1620 + return 0; 1621 + } 1622 + 1623 + static int arch_timer_get_user(struct kvm_vcpu *vcpu, 1624 + const struct sys_reg_desc *rd, 1625 + u64 *val) 1626 + { 1627 + switch (reg_to_encoding(rd)) { 1628 + case SYS_CNTVCT_EL0: 1629 + *val = kvm_phys_timer_read() - timer_get_offset(vcpu_vtimer(vcpu)); 1630 + break; 1631 + case SYS_CNTPCT_EL0: 1632 + *val = kvm_phys_timer_read() - timer_get_offset(vcpu_ptimer(vcpu)); 1633 + break; 1634 + default: 1635 + *val = __vcpu_sys_reg(vcpu, rd->reg); 1636 + } 1637 + 1638 + return 0; 1605 1639 } 1606 1640 1607 1641 static s64 kvm_arm64_ftr_safe_value(u32 id, const struct arm64_ftr_bits *ftrp, ··· 2539 2507 "trap of EL2 register redirected to EL1"); 2540 2508 } 2541 2509 2542 - #define EL2_REG_FILTERED(name, acc, rst, v, filter) { \ 2510 + #define SYS_REG_USER_FILTER(name, acc, rst, v, gu, su, filter) { \ 2543 2511 SYS_DESC(SYS_##name), \ 2544 2512 .access = acc, \ 2545 2513 .reset = rst, \ 2546 2514 .reg = name, \ 2515 + .get_user = gu, \ 2516 + .set_user = su, \ 2547 2517 .visibility = filter, \ 2548 2518 .val = v, \ 2549 2519 } 2520 + 2521 + #define EL2_REG_FILTERED(name, acc, rst, v, filter) \ 2522 + SYS_REG_USER_FILTER(name, acc, rst, v, NULL, NULL, filter) 2550 2523 2551 2524 #define EL2_REG(name, acc, rst, v) \ 2552 2525 EL2_REG_FILTERED(name, acc, rst, v, el2_visibility) ··· 2562 2525 #define EL2_REG_VNCR_GICv3(name) \ 2563 2526 EL2_REG_VNCR_FILT(name, hidden_visibility) 2564 2527 #define EL2_REG_REDIR(name, rst, v) EL2_REG(name, bad_redir_trap, rst, v) 2528 + 2529 + #define TIMER_REG(name, vis) \ 2530 + SYS_REG_USER_FILTER(name, access_arch_timer, reset_val, 0, \ 2531 + arch_timer_get_user, arch_timer_set_user, vis) 2565 2532 2566 2533 /* 2567 2534 * Since reset() callback and field val are not used for idregs, they will be ··· 2746 2705 2747 2706 if (guest_hyp_sve_traps_enabled(vcpu)) { 2748 2707 kvm_inject_nested_sve_trap(vcpu); 2749 - return true; 2708 + return false; 2750 2709 } 2751 2710 2752 2711 if (!p->is_write) { 2753 - p->regval = vcpu_read_sys_reg(vcpu, ZCR_EL2); 2712 + p->regval = 
__vcpu_sys_reg(vcpu, ZCR_EL2); 2754 2713 return true; 2755 2714 } 2756 2715 2757 2716 vq = SYS_FIELD_GET(ZCR_ELx, LEN, p->regval) + 1; 2758 2717 vq = min(vq, vcpu_sve_max_vq(vcpu)); 2759 - vcpu_write_sys_reg(vcpu, vq - 1, ZCR_EL2); 2760 - 2718 + __vcpu_assign_sys_reg(vcpu, ZCR_EL2, vq - 1); 2761 2719 return true; 2762 2720 } 2763 2721 ··· 2871 2831 const struct sys_reg_desc *rd) 2872 2832 { 2873 2833 return __el2_visibility(vcpu, rd, s1pie_visibility); 2834 + } 2835 + 2836 + static unsigned int cnthv_visibility(const struct kvm_vcpu *vcpu, 2837 + const struct sys_reg_desc *rd) 2838 + { 2839 + if (vcpu_has_nv(vcpu) && 2840 + !vcpu_has_feature(vcpu, KVM_ARM_VCPU_HAS_EL2_E2H0)) 2841 + return 0; 2842 + 2843 + return REG_HIDDEN; 2874 2844 } 2875 2845 2876 2846 static bool access_mdcr(struct kvm_vcpu *vcpu, ··· 3532 3482 AMU_AMEVTYPER1_EL0(14), 3533 3483 AMU_AMEVTYPER1_EL0(15), 3534 3484 3535 - { SYS_DESC(SYS_CNTPCT_EL0), access_arch_timer }, 3536 - { SYS_DESC(SYS_CNTVCT_EL0), access_arch_timer }, 3485 + { SYS_DESC(SYS_CNTPCT_EL0), .access = access_arch_timer, 3486 + .get_user = arch_timer_get_user, .set_user = arch_timer_set_user }, 3487 + { SYS_DESC(SYS_CNTVCT_EL0), .access = access_arch_timer, 3488 + .get_user = arch_timer_get_user, .set_user = arch_timer_set_user }, 3537 3489 { SYS_DESC(SYS_CNTPCTSS_EL0), access_arch_timer }, 3538 3490 { SYS_DESC(SYS_CNTVCTSS_EL0), access_arch_timer }, 3539 3491 { SYS_DESC(SYS_CNTP_TVAL_EL0), access_arch_timer }, 3540 - { SYS_DESC(SYS_CNTP_CTL_EL0), access_arch_timer }, 3541 - { SYS_DESC(SYS_CNTP_CVAL_EL0), access_arch_timer }, 3492 + TIMER_REG(CNTP_CTL_EL0, NULL), 3493 + TIMER_REG(CNTP_CVAL_EL0, NULL), 3542 3494 3543 3495 { SYS_DESC(SYS_CNTV_TVAL_EL0), access_arch_timer }, 3544 - { SYS_DESC(SYS_CNTV_CTL_EL0), access_arch_timer }, 3545 - { SYS_DESC(SYS_CNTV_CVAL_EL0), access_arch_timer }, 3496 + TIMER_REG(CNTV_CTL_EL0, NULL), 3497 + TIMER_REG(CNTV_CVAL_EL0, NULL), 3546 3498 3547 3499 /* PMEVCNTRn_EL0 */ 3548 3500 PMU_PMEVCNTR_EL0(0), ··· 3742 3690 EL2_REG_VNCR(CNTVOFF_EL2, reset_val, 0), 3743 3691 EL2_REG(CNTHCTL_EL2, access_rw, reset_val, 0), 3744 3692 { SYS_DESC(SYS_CNTHP_TVAL_EL2), access_arch_timer }, 3745 - EL2_REG(CNTHP_CTL_EL2, access_arch_timer, reset_val, 0), 3746 - EL2_REG(CNTHP_CVAL_EL2, access_arch_timer, reset_val, 0), 3693 + TIMER_REG(CNTHP_CTL_EL2, el2_visibility), 3694 + TIMER_REG(CNTHP_CVAL_EL2, el2_visibility), 3747 3695 3748 - { SYS_DESC(SYS_CNTHV_TVAL_EL2), access_hv_timer }, 3749 - EL2_REG(CNTHV_CTL_EL2, access_hv_timer, reset_val, 0), 3750 - EL2_REG(CNTHV_CVAL_EL2, access_hv_timer, reset_val, 0), 3696 + { SYS_DESC(SYS_CNTHV_TVAL_EL2), access_arch_timer, .visibility = cnthv_visibility }, 3697 + TIMER_REG(CNTHV_CTL_EL2, cnthv_visibility), 3698 + TIMER_REG(CNTHV_CVAL_EL2, cnthv_visibility), 3751 3699 3752 3700 { SYS_DESC(SYS_CNTKCTL_EL12), access_cntkctl_el12 }, 3753 3701 ··· 5285 5233 } 5286 5234 } 5287 5235 5236 + static u64 kvm_one_reg_to_id(const struct kvm_one_reg *reg) 5237 + { 5238 + switch(reg->id) { 5239 + case KVM_REG_ARM_TIMER_CVAL: 5240 + return TO_ARM64_SYS_REG(CNTV_CVAL_EL0); 5241 + case KVM_REG_ARM_TIMER_CNT: 5242 + return TO_ARM64_SYS_REG(CNTVCT_EL0); 5243 + default: 5244 + return reg->id; 5245 + } 5246 + } 5247 + 5288 5248 int kvm_sys_reg_get_user(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg, 5289 5249 const struct sys_reg_desc table[], unsigned int num) 5290 5250 { 5291 5251 u64 __user *uaddr = (u64 __user *)(unsigned long)reg->addr; 5292 5252 const struct sys_reg_desc *r; 5253 + u64 id = kvm_one_reg_to_id(reg); 
5293 5254 u64 val; 5294 5255 int ret; 5295 5256 5296 - r = id_to_sys_reg_desc(vcpu, reg->id, table, num); 5257 + r = id_to_sys_reg_desc(vcpu, id, table, num); 5297 5258 if (!r || sysreg_hidden(vcpu, r)) 5298 5259 return -ENOENT; 5299 5260 ··· 5339 5274 { 5340 5275 u64 __user *uaddr = (u64 __user *)(unsigned long)reg->addr; 5341 5276 const struct sys_reg_desc *r; 5277 + u64 id = kvm_one_reg_to_id(reg); 5342 5278 u64 val; 5343 5279 int ret; 5344 5280 5345 5281 if (get_user(val, uaddr)) 5346 5282 return -EFAULT; 5347 5283 5348 - r = id_to_sys_reg_desc(vcpu, reg->id, table, num); 5284 + r = id_to_sys_reg_desc(vcpu, id, table, num); 5349 5285 if (!r || sysreg_hidden(vcpu, r)) 5350 5286 return -ENOENT; 5351 5287 ··· 5406 5340 5407 5341 static bool copy_reg_to_user(const struct sys_reg_desc *reg, u64 __user **uind) 5408 5342 { 5343 + u64 idx; 5344 + 5409 5345 if (!*uind) 5410 5346 return true; 5411 5347 5412 - if (put_user(sys_reg_to_index(reg), *uind)) 5348 + switch (reg_to_encoding(reg)) { 5349 + case SYS_CNTV_CVAL_EL0: 5350 + idx = KVM_REG_ARM_TIMER_CVAL; 5351 + break; 5352 + case SYS_CNTVCT_EL0: 5353 + idx = KVM_REG_ARM_TIMER_CNT; 5354 + break; 5355 + default: 5356 + idx = sys_reg_to_index(reg); 5357 + } 5358 + 5359 + if (put_user(idx, *uind)) 5413 5360 return false; 5414 5361 5415 5362 (*uind)++;
+6
arch/arm64/kvm/sys_regs.h
··· 257 257 (val); \ 258 258 }) 259 259 260 + #define TO_ARM64_SYS_REG(r) ARM64_SYS_REG(sys_reg_Op0(SYS_ ## r), \ 261 + sys_reg_Op1(SYS_ ## r), \ 262 + sys_reg_CRn(SYS_ ## r), \ 263 + sys_reg_CRm(SYS_ ## r), \ 264 + sys_reg_Op2(SYS_ ## r)) 265 + 260 266 #endif /* __ARM64_KVM_SYS_REGS_LOCAL_H__ */
+4 -1
arch/arm64/kvm/vgic/vgic-v3.c
··· 297 297 { 298 298 struct vgic_v3_cpu_if *vgic_v3 = &vcpu->arch.vgic_cpu.vgic_v3; 299 299 300 + if (!vgic_is_v3(vcpu->kvm)) 301 + return; 302 + 300 303 /* Hide GICv3 sysreg if necessary */ 301 - if (!kvm_has_gicv3(vcpu->kvm)) { 304 + if (vcpu->kvm->arch.vgic.vgic_model == KVM_DEV_TYPE_ARM_VGIC_V2) { 302 305 vgic_v3->vgic_hcr |= (ICH_HCR_EL2_TALL0 | ICH_HCR_EL2_TALL1 | 303 306 ICH_HCR_EL2_TC); 304 307 return;
+5 -3
arch/x86/kvm/pmu.c
··· 108 108 bool is_intel = boot_cpu_data.x86_vendor == X86_VENDOR_INTEL; 109 109 int min_nr_gp_ctrs = pmu_ops->MIN_NR_GP_COUNTERS; 110 110 111 - perf_get_x86_pmu_capability(&kvm_host_pmu); 112 - 113 111 /* 114 112 * Hybrid PMUs don't play nice with virtualization without careful 115 113 * configuration by userspace, and KVM's APIs for reporting supported 116 114 * vPMU features do not account for hybrid PMUs. Disable vPMU support 117 115 * for hybrid PMUs until KVM gains a way to let userspace opt-in. 118 116 */ 119 - if (cpu_feature_enabled(X86_FEATURE_HYBRID_CPU)) 117 + if (cpu_feature_enabled(X86_FEATURE_HYBRID_CPU)) { 120 118 enable_pmu = false; 119 + memset(&kvm_host_pmu, 0, sizeof(kvm_host_pmu)); 120 + } else { 121 + perf_get_x86_pmu_capability(&kvm_host_pmu); 122 + } 121 123 122 124 if (enable_pmu) { 123 125 /*
+4 -3
arch/x86/kvm/x86.c
··· 13941 13941 13942 13942 #ifdef CONFIG_KVM_GUEST_MEMFD 13943 13943 /* 13944 - * KVM doesn't yet support mmap() on guest_memfd for VMs with private memory 13945 - * (the private vs. shared tracking needs to be moved into guest_memfd). 13944 + * KVM doesn't yet support initializing guest_memfd memory as shared for VMs 13945 + * with private memory (the private vs. shared tracking needs to be moved into 13946 + * guest_memfd). 13946 13947 */ 13947 - bool kvm_arch_supports_gmem_mmap(struct kvm *kvm) 13948 + bool kvm_arch_supports_gmem_init_shared(struct kvm *kvm) 13948 13949 { 13949 13950 return !kvm_arch_has_private_mem(kvm); 13950 13951 }
+16 -8
include/kvm/arm_arch_timer.h
··· 51 51 }; 52 52 53 53 struct arch_timer_context { 54 - struct kvm_vcpu *vcpu; 55 - 56 54 /* Emulated Timer (may be unused) */ 57 55 struct hrtimer hrtimer; 58 56 u64 ns_frac; ··· 68 70 struct { 69 71 bool level; 70 72 } irq; 73 + 74 + /* Who am I? */ 75 + enum kvm_arch_timers timer_id; 71 76 72 77 /* Duplicated state from arch_timer.c for convenience */ 73 78 u32 host_timer_irq; ··· 107 106 108 107 void kvm_timer_init_vm(struct kvm *kvm); 109 108 110 - u64 kvm_arm_timer_get_reg(struct kvm_vcpu *, u64 regid); 111 - int kvm_arm_timer_set_reg(struct kvm_vcpu *, u64 regid, u64 value); 112 - 113 109 int kvm_arm_timer_set_attr(struct kvm_vcpu *vcpu, struct kvm_device_attr *attr); 114 110 int kvm_arm_timer_get_attr(struct kvm_vcpu *vcpu, struct kvm_device_attr *attr); 115 111 int kvm_arm_timer_has_attr(struct kvm_vcpu *vcpu, struct kvm_device_attr *attr); ··· 125 127 #define vcpu_hvtimer(v) (&(v)->arch.timer_cpu.timers[TIMER_HVTIMER]) 126 128 #define vcpu_hptimer(v) (&(v)->arch.timer_cpu.timers[TIMER_HPTIMER]) 127 129 128 - #define arch_timer_ctx_index(ctx) ((ctx) - vcpu_timer((ctx)->vcpu)->timers) 129 - 130 - #define timer_vm_data(ctx) (&(ctx)->vcpu->kvm->arch.timer_data) 130 + #define arch_timer_ctx_index(ctx) ((ctx)->timer_id) 131 + #define timer_context_to_vcpu(ctx) container_of((ctx), struct kvm_vcpu, arch.timer_cpu.timers[(ctx)->timer_id]) 132 + #define timer_vm_data(ctx) (&(timer_context_to_vcpu(ctx)->kvm->arch.timer_data)) 131 133 #define timer_irq(ctx) (timer_vm_data(ctx)->ppi[arch_timer_ctx_index(ctx)]) 132 134 133 135 u64 kvm_arm_timer_read_sysreg(struct kvm_vcpu *vcpu, ··· 174 176 offset += *ctxt->offset.vcpu_offset; 175 177 176 178 return offset; 179 + } 180 + 181 + static inline void timer_set_offset(struct arch_timer_context *ctxt, u64 offset) 182 + { 183 + if (!ctxt->offset.vm_offset) { 184 + WARN(offset, "timer %d\n", arch_timer_ctx_index(ctxt)); 185 + return; 186 + } 187 + 188 + WRITE_ONCE(*ctxt->offset.vm_offset, offset); 177 189 } 178 190 179 191 #endif
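The header hunk above drops the per-context vcpu back-pointer and instead
recovers the vCPU from the stored timer_id via container_of() over the
embedded timers[] array. Below is a standalone toy illustration of that
pattern; the parent/child structs are invented for the example, and, like the
kernel macro, it relies on the compiler accepting a runtime array index inside
offsetof(), which GCC and clang support.

#include <stddef.h>
#include <stdio.h>

#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

struct child {
	int id;		/* which slot of parent->children[] this is */
	int value;
};

struct parent {
	const char *name;
	struct child children[4];
};

static struct parent *child_to_parent(struct child *c)
{
	/* Mirrors timer_context_to_vcpu(): the offset depends on c->id. */
	return container_of(c, struct parent, children[c->id]);
}

int main(void)
{
	struct parent p = { .name = "demo" };

	for (int i = 0; i < 4; i++)
		p.children[i].id = i;

	printf("%s\n", child_to_parent(&p.children[2])->name);	/* prints "demo" */
	return 0;
}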
+11 -1
include/linux/kvm_host.h
··· 729 729 #endif 730 730 731 731 #ifdef CONFIG_KVM_GUEST_MEMFD 732 - bool kvm_arch_supports_gmem_mmap(struct kvm *kvm); 732 + bool kvm_arch_supports_gmem_init_shared(struct kvm *kvm); 733 + 734 + static inline u64 kvm_gmem_get_supported_flags(struct kvm *kvm) 735 + { 736 + u64 flags = GUEST_MEMFD_FLAG_MMAP; 737 + 738 + if (!kvm || kvm_arch_supports_gmem_init_shared(kvm)) 739 + flags |= GUEST_MEMFD_FLAG_INIT_SHARED; 740 + 741 + return flags; 742 + } 733 743 #endif 734 744 735 745 #ifndef kvm_arch_has_readonly_mem
+3 -2
include/uapi/linux/kvm.h
··· 962 962 #define KVM_CAP_ARM_EL2_E2H0 241 963 963 #define KVM_CAP_RISCV_MP_STATE_RESET 242 964 964 #define KVM_CAP_ARM_CACHEABLE_PFNMAP_SUPPORTED 243 965 - #define KVM_CAP_GUEST_MEMFD_MMAP 244 965 + #define KVM_CAP_GUEST_MEMFD_FLAGS 244 966 966 967 967 struct kvm_irq_routing_irqchip { 968 968 __u32 irqchip; ··· 1599 1599 #define KVM_MEMORY_ATTRIBUTE_PRIVATE (1ULL << 3) 1600 1600 1601 1601 #define KVM_CREATE_GUEST_MEMFD _IOWR(KVMIO, 0xd4, struct kvm_create_guest_memfd) 1602 - #define GUEST_MEMFD_FLAG_MMAP (1ULL << 0) 1602 + #define GUEST_MEMFD_FLAG_MMAP (1ULL << 0) 1603 + #define GUEST_MEMFD_FLAG_INIT_SHARED (1ULL << 1) 1603 1604 1604 1605 struct kvm_create_guest_memfd { 1605 1606 __u64 size;
+1 -1
tools/testing/selftests/kvm/arm64/arch_timer_edge_cases.c
··· 1020 1020 { 1021 1021 const uint64_t MIN_ROLLOVER_SECS = 40ULL * 365 * 24 * 3600; 1022 1022 uint64_t freq = read_sysreg(CNTFRQ_EL0); 1023 - uint64_t width = ilog2(MIN_ROLLOVER_SECS * freq); 1023 + int width = ilog2(MIN_ROLLOVER_SECS * freq); 1024 1024 1025 1025 width = clamp(width, 56, 64); 1026 1026 CVAL_MAX = GENMASK_ULL(width - 1, 0);
+43
tools/testing/selftests/kvm/arm64/external_aborts.c
··· 359 359 kvm_vm_free(vm); 360 360 } 361 361 362 + static void test_serror_amo_guest(void) 363 + { 364 + /* 365 + * The ISB is entirely unnecessary (and highlights how FEAT_NV2 is borked) 366 + * since the write is redirected to memory. But don't write (intentionally) 367 + * broken code! 368 + */ 369 + sysreg_clear_set(hcr_el2, HCR_EL2_AMO | HCR_EL2_TGE, 0); 370 + isb(); 371 + 372 + GUEST_SYNC(0); 373 + GUEST_ASSERT(read_sysreg(isr_el1) & ISR_EL1_A); 374 + 375 + /* 376 + * KVM treats the effective value of AMO as 1 when 377 + * HCR_EL2.{E2H,TGE} = {1, 0}, meaning the SError will be taken when 378 + * unmasked. 379 + */ 380 + local_serror_enable(); 381 + isb(); 382 + local_serror_disable(); 383 + 384 + GUEST_FAIL("Should've taken pending SError exception"); 385 + } 386 + 387 + static void test_serror_amo(void) 388 + { 389 + struct kvm_vcpu *vcpu; 390 + struct kvm_vm *vm = vm_create_with_dabt_handler(&vcpu, test_serror_amo_guest, 391 + unexpected_dabt_handler); 392 + 393 + vm_install_exception_handler(vm, VECTOR_ERROR_CURRENT, expect_serror_handler); 394 + vcpu_run_expect_sync(vcpu); 395 + vcpu_inject_serror(vcpu); 396 + vcpu_run_expect_done(vcpu); 397 + kvm_vm_free(vm); 398 + } 399 + 362 400 int main(void) 363 401 { 364 402 test_mmio_abort(); ··· 407 369 test_serror_emulated(); 408 370 test_mmio_ease(); 409 371 test_s1ptw_abort(); 372 + 373 + if (!test_supports_el2()) 374 + return 0; 375 + 376 + test_serror_amo(); 410 377 }
+96 -3
tools/testing/selftests/kvm/arm64/get-reg-list.c
··· 65 65 REG_FEAT(SCTLR2_EL1, ID_AA64MMFR3_EL1, SCTLRX, IMP), 66 66 REG_FEAT(VDISR_EL2, ID_AA64PFR0_EL1, RAS, IMP), 67 67 REG_FEAT(VSESR_EL2, ID_AA64PFR0_EL1, RAS, IMP), 68 + REG_FEAT(VNCR_EL2, ID_AA64MMFR4_EL1, NV_frac, NV2_ONLY), 69 + REG_FEAT(CNTHV_CTL_EL2, ID_AA64MMFR1_EL1, VH, IMP), 70 + REG_FEAT(CNTHV_CVAL_EL2,ID_AA64MMFR1_EL1, VH, IMP), 68 71 }; 69 72 70 73 bool filter_reg(__u64 reg) ··· 348 345 KVM_REG_ARM_FW_FEAT_BMAP_REG(1), /* KVM_REG_ARM_STD_HYP_BMAP */ 349 346 KVM_REG_ARM_FW_FEAT_BMAP_REG(2), /* KVM_REG_ARM_VENDOR_HYP_BMAP */ 350 347 KVM_REG_ARM_FW_FEAT_BMAP_REG(3), /* KVM_REG_ARM_VENDOR_HYP_BMAP_2 */ 351 - ARM64_SYS_REG(3, 3, 14, 3, 1), /* CNTV_CTL_EL0 */ 352 - ARM64_SYS_REG(3, 3, 14, 3, 2), /* CNTV_CVAL_EL0 */ 353 - ARM64_SYS_REG(3, 3, 14, 0, 2), 348 + 349 + /* 350 + * EL0 Virtual Timer Registers 351 + * 352 + * WARNING: 353 + * KVM_REG_ARM_TIMER_CVAL and KVM_REG_ARM_TIMER_CNT are not defined 354 + * with the appropriate register encodings. Their values have been 355 + * accidentally swapped. As this is set API, the definitions here 356 + * must be used, rather than ones derived from the encodings. 357 + */ 358 + KVM_ARM64_SYS_REG(SYS_CNTV_CTL_EL0), 359 + KVM_REG_ARM_TIMER_CVAL, 360 + KVM_REG_ARM_TIMER_CNT, 361 + 354 362 ARM64_SYS_REG(3, 0, 0, 0, 0), /* MIDR_EL1 */ 355 363 ARM64_SYS_REG(3, 0, 0, 0, 6), /* REVIDR_EL1 */ 356 364 ARM64_SYS_REG(3, 1, 0, 0, 1), /* CLIDR_EL1 */ ··· 769 755 SYS_REG(VSESR_EL2), 770 756 }; 771 757 758 + static __u64 el2_e2h0_regs[] = { 759 + /* Empty */ 760 + }; 761 + 772 762 #define BASE_SUBLIST \ 773 763 { "base", .regs = base_regs, .regs_n = ARRAY_SIZE(base_regs), } 774 764 #define VREGS_SUBLIST \ ··· 806 788 .feature = KVM_ARM_VCPU_HAS_EL2, \ 807 789 .regs = el2_regs, \ 808 790 .regs_n = ARRAY_SIZE(el2_regs), \ 791 + } 792 + #define EL2_E2H0_SUBLIST \ 793 + EL2_SUBLIST, \ 794 + { \ 795 + .name = "EL2 E2H0", \ 796 + .capability = KVM_CAP_ARM_EL2_E2H0, \ 797 + .feature = KVM_ARM_VCPU_HAS_EL2_E2H0, \ 798 + .regs = el2_e2h0_regs, \ 799 + .regs_n = ARRAY_SIZE(el2_e2h0_regs), \ 809 800 } 810 801 811 802 static struct vcpu_reg_list vregs_config = { ··· 924 897 }, 925 898 }; 926 899 900 + static struct vcpu_reg_list el2_e2h0_vregs_config = { 901 + .sublists = { 902 + BASE_SUBLIST, 903 + EL2_E2H0_SUBLIST, 904 + VREGS_SUBLIST, 905 + {0}, 906 + }, 907 + }; 908 + 909 + static struct vcpu_reg_list el2_e2h0_vregs_pmu_config = { 910 + .sublists = { 911 + BASE_SUBLIST, 912 + EL2_E2H0_SUBLIST, 913 + VREGS_SUBLIST, 914 + PMU_SUBLIST, 915 + {0}, 916 + }, 917 + }; 918 + 919 + static struct vcpu_reg_list el2_e2h0_sve_config = { 920 + .sublists = { 921 + BASE_SUBLIST, 922 + EL2_E2H0_SUBLIST, 923 + SVE_SUBLIST, 924 + {0}, 925 + }, 926 + }; 927 + 928 + static struct vcpu_reg_list el2_e2h0_sve_pmu_config = { 929 + .sublists = { 930 + BASE_SUBLIST, 931 + EL2_E2H0_SUBLIST, 932 + SVE_SUBLIST, 933 + PMU_SUBLIST, 934 + {0}, 935 + }, 936 + }; 937 + 938 + static struct vcpu_reg_list el2_e2h0_pauth_config = { 939 + .sublists = { 940 + BASE_SUBLIST, 941 + EL2_E2H0_SUBLIST, 942 + VREGS_SUBLIST, 943 + PAUTH_SUBLIST, 944 + {0}, 945 + }, 946 + }; 947 + 948 + static struct vcpu_reg_list el2_e2h0_pauth_pmu_config = { 949 + .sublists = { 950 + BASE_SUBLIST, 951 + EL2_E2H0_SUBLIST, 952 + VREGS_SUBLIST, 953 + PAUTH_SUBLIST, 954 + PMU_SUBLIST, 955 + {0}, 956 + }, 957 + }; 958 + 927 959 struct vcpu_reg_list *vcpu_configs[] = { 928 960 &vregs_config, 929 961 &vregs_pmu_config, ··· 997 911 &el2_sve_pmu_config, 998 912 &el2_pauth_config, 999 913 &el2_pauth_pmu_config, 914 + 915 + 
&el2_e2h0_vregs_config, 916 + &el2_e2h0_vregs_pmu_config, 917 + &el2_e2h0_sve_config, 918 + &el2_e2h0_sve_pmu_config, 919 + &el2_e2h0_pauth_config, 920 + &el2_e2h0_pauth_pmu_config, 1000 921 }; 1001 922 int vcpu_configs_n = ARRAY_SIZE(vcpu_configs);
+3
tools/testing/selftests/kvm/arm64/set_id_regs.c
··· 249 249 GUEST_REG_SYNC(SYS_ID_AA64ISAR2_EL1); 250 250 GUEST_REG_SYNC(SYS_ID_AA64ISAR3_EL1); 251 251 GUEST_REG_SYNC(SYS_ID_AA64PFR0_EL1); 252 + GUEST_REG_SYNC(SYS_ID_AA64PFR1_EL1); 252 253 GUEST_REG_SYNC(SYS_ID_AA64MMFR0_EL1); 253 254 GUEST_REG_SYNC(SYS_ID_AA64MMFR1_EL1); 254 255 GUEST_REG_SYNC(SYS_ID_AA64MMFR2_EL1); 255 256 GUEST_REG_SYNC(SYS_ID_AA64MMFR3_EL1); 256 257 GUEST_REG_SYNC(SYS_ID_AA64ZFR0_EL1); 258 + GUEST_REG_SYNC(SYS_MPIDR_EL1); 259 + GUEST_REG_SYNC(SYS_CLIDR_EL1); 257 260 GUEST_REG_SYNC(SYS_CTR_EL0); 258 261 GUEST_REG_SYNC(SYS_MIDR_EL1); 259 262 GUEST_REG_SYNC(SYS_REVIDR_EL1);
+2 -1
tools/testing/selftests/kvm/arm64/vgic_lpi_stress.c
··· 123 123 static void guest_code(size_t nr_lpis) 124 124 { 125 125 guest_setup_gic(); 126 + local_irq_enable(); 126 127 127 128 GUEST_SYNC(0); 128 129 ··· 332 331 { 333 332 int i; 334 333 335 - vcpus = malloc(test_data.nr_cpus * sizeof(struct kvm_vcpu)); 334 + vcpus = malloc(test_data.nr_cpus * sizeof(struct kvm_vcpu *)); 336 335 TEST_ASSERT(vcpus, "Failed to allocate vCPU array"); 337 336 338 337 vm = vm_create_with_vcpus(test_data.nr_cpus, guest_code, vcpus);
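The vgic_lpi_stress fix above corrects a classic sizeof mistake: the array
holds struct kvm_vcpu pointers, but the allocation was sized for full
struct kvm_vcpu objects. Sizing the allocation from the pointer being assigned
sidesteps the mismatch entirely; a tiny illustrative snippet (not from the
patch, function name invented):

#include <stdlib.h>

struct kvm_vcpu;		/* opaque here; only pointers are stored */

static struct kvm_vcpu **alloc_vcpu_array(size_t nr_cpus)
{
	struct kvm_vcpu **vcpus;

	/* sizeof(*vcpus) tracks the element type even if it changes later. */
	vcpus = malloc(nr_cpus * sizeof(*vcpus));
	return vcpus;
}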
+92 -79
tools/testing/selftests/kvm/guest_memfd_test.c
··· 14 14 #include <linux/bitmap.h> 15 15 #include <linux/falloc.h> 16 16 #include <linux/sizes.h> 17 - #include <setjmp.h> 18 - #include <signal.h> 19 17 #include <sys/mman.h> 20 18 #include <sys/types.h> 21 19 #include <sys/stat.h> ··· 22 24 #include "test_util.h" 23 25 #include "ucall_common.h" 24 26 25 - static void test_file_read_write(int fd) 27 + static size_t page_size; 28 + 29 + static void test_file_read_write(int fd, size_t total_size) 26 30 { 27 31 char buf[64]; 28 32 ··· 38 38 "pwrite on a guest_mem fd should fail"); 39 39 } 40 40 41 - static void test_mmap_supported(int fd, size_t page_size, size_t total_size) 41 + static void test_mmap_cow(int fd, size_t size) 42 + { 43 + void *mem; 44 + 45 + mem = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_PRIVATE, fd, 0); 46 + TEST_ASSERT(mem == MAP_FAILED, "Copy-on-write not allowed by guest_memfd."); 47 + } 48 + 49 + static void test_mmap_supported(int fd, size_t total_size) 42 50 { 43 51 const char val = 0xaa; 44 52 char *mem; 45 53 size_t i; 46 54 int ret; 47 55 48 - mem = mmap(NULL, total_size, PROT_READ | PROT_WRITE, MAP_PRIVATE, fd, 0); 49 - TEST_ASSERT(mem == MAP_FAILED, "Copy-on-write not allowed by guest_memfd."); 50 - 51 - mem = mmap(NULL, total_size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0); 52 - TEST_ASSERT(mem != MAP_FAILED, "mmap() for guest_memfd should succeed."); 56 + mem = kvm_mmap(total_size, PROT_READ | PROT_WRITE, MAP_SHARED, fd); 53 57 54 58 memset(mem, val, total_size); 55 59 for (i = 0; i < total_size; i++) ··· 72 68 for (i = 0; i < total_size; i++) 73 69 TEST_ASSERT_EQ(READ_ONCE(mem[i]), val); 74 70 75 - ret = munmap(mem, total_size); 76 - TEST_ASSERT(!ret, "munmap() should succeed."); 71 + kvm_munmap(mem, total_size); 77 72 } 78 73 79 - static sigjmp_buf jmpbuf; 80 - void fault_sigbus_handler(int signum) 74 + static void test_fault_sigbus(int fd, size_t accessible_size, size_t map_size) 81 75 { 82 - siglongjmp(jmpbuf, 1); 83 - } 84 - 85 - static void test_fault_overflow(int fd, size_t page_size, size_t total_size) 86 - { 87 - struct sigaction sa_old, sa_new = { 88 - .sa_handler = fault_sigbus_handler, 89 - }; 90 - size_t map_size = total_size * 4; 91 76 const char val = 0xaa; 92 77 char *mem; 93 78 size_t i; 94 - int ret; 95 79 96 - mem = mmap(NULL, map_size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0); 97 - TEST_ASSERT(mem != MAP_FAILED, "mmap() for guest_memfd should succeed."); 80 + mem = kvm_mmap(map_size, PROT_READ | PROT_WRITE, MAP_SHARED, fd); 98 81 99 - sigaction(SIGBUS, &sa_new, &sa_old); 100 - if (sigsetjmp(jmpbuf, 1) == 0) { 101 - memset(mem, 0xaa, map_size); 102 - TEST_ASSERT(false, "memset() should have triggered SIGBUS."); 103 - } 104 - sigaction(SIGBUS, &sa_old, NULL); 82 + TEST_EXPECT_SIGBUS(memset(mem, val, map_size)); 83 + TEST_EXPECT_SIGBUS((void)READ_ONCE(mem[accessible_size])); 105 84 106 - for (i = 0; i < total_size; i++) 85 + for (i = 0; i < accessible_size; i++) 107 86 TEST_ASSERT_EQ(READ_ONCE(mem[i]), val); 108 87 109 - ret = munmap(mem, map_size); 110 - TEST_ASSERT(!ret, "munmap() should succeed."); 88 + kvm_munmap(mem, map_size); 111 89 } 112 90 113 - static void test_mmap_not_supported(int fd, size_t page_size, size_t total_size) 91 + static void test_fault_overflow(int fd, size_t total_size) 92 + { 93 + test_fault_sigbus(fd, total_size, total_size * 4); 94 + } 95 + 96 + static void test_fault_private(int fd, size_t total_size) 97 + { 98 + test_fault_sigbus(fd, 0, total_size); 99 + } 100 + 101 + static void test_mmap_not_supported(int fd, size_t total_size) 114 102 { 115 103 char 
*mem; 116 104 ··· 113 117 TEST_ASSERT_EQ(mem, MAP_FAILED); 114 118 } 115 119 116 - static void test_file_size(int fd, size_t page_size, size_t total_size) 120 + static void test_file_size(int fd, size_t total_size) 117 121 { 118 122 struct stat sb; 119 123 int ret; ··· 124 128 TEST_ASSERT_EQ(sb.st_blksize, page_size); 125 129 } 126 130 127 - static void test_fallocate(int fd, size_t page_size, size_t total_size) 131 + static void test_fallocate(int fd, size_t total_size) 128 132 { 129 133 int ret; 130 134 ··· 161 165 TEST_ASSERT(!ret, "fallocate to restore punched hole should succeed"); 162 166 } 163 167 164 - static void test_invalid_punch_hole(int fd, size_t page_size, size_t total_size) 168 + static void test_invalid_punch_hole(int fd, size_t total_size) 165 169 { 166 170 struct { 167 171 off_t offset; ··· 192 196 } 193 197 194 198 static void test_create_guest_memfd_invalid_sizes(struct kvm_vm *vm, 195 - uint64_t guest_memfd_flags, 196 - size_t page_size) 199 + uint64_t guest_memfd_flags) 197 200 { 198 201 size_t size; 199 202 int fd; ··· 209 214 { 210 215 int fd1, fd2, ret; 211 216 struct stat st1, st2; 212 - size_t page_size = getpagesize(); 213 217 214 218 fd1 = __vm_create_guest_memfd(vm, page_size, 0); 215 219 TEST_ASSERT(fd1 != -1, "memfd creation should succeed"); ··· 233 239 close(fd1); 234 240 } 235 241 236 - static void test_guest_memfd_flags(struct kvm_vm *vm, uint64_t valid_flags) 242 + static void test_guest_memfd_flags(struct kvm_vm *vm) 237 243 { 238 - size_t page_size = getpagesize(); 244 + uint64_t valid_flags = vm_check_cap(vm, KVM_CAP_GUEST_MEMFD_FLAGS); 239 245 uint64_t flag; 240 246 int fd; 241 247 ··· 254 260 } 255 261 } 256 262 257 - static void test_guest_memfd(unsigned long vm_type) 263 + #define gmem_test(__test, __vm, __flags) \ 264 + do { \ 265 + int fd = vm_create_guest_memfd(__vm, page_size * 4, __flags); \ 266 + \ 267 + test_##__test(fd, page_size * 4); \ 268 + close(fd); \ 269 + } while (0) 270 + 271 + static void __test_guest_memfd(struct kvm_vm *vm, uint64_t flags) 258 272 { 259 - uint64_t flags = 0; 260 - struct kvm_vm *vm; 261 - size_t total_size; 262 - size_t page_size; 263 - int fd; 264 - 265 - page_size = getpagesize(); 266 - total_size = page_size * 4; 267 - 268 - vm = vm_create_barebones_type(vm_type); 269 - 270 - if (vm_check_cap(vm, KVM_CAP_GUEST_MEMFD_MMAP)) 271 - flags |= GUEST_MEMFD_FLAG_MMAP; 272 - 273 273 test_create_guest_memfd_multiple(vm); 274 - test_create_guest_memfd_invalid_sizes(vm, flags, page_size); 274 + test_create_guest_memfd_invalid_sizes(vm, flags); 275 275 276 - fd = vm_create_guest_memfd(vm, total_size, flags); 277 - 278 - test_file_read_write(fd); 276 + gmem_test(file_read_write, vm, flags); 279 277 280 278 if (flags & GUEST_MEMFD_FLAG_MMAP) { 281 - test_mmap_supported(fd, page_size, total_size); 282 - test_fault_overflow(fd, page_size, total_size); 279 + if (flags & GUEST_MEMFD_FLAG_INIT_SHARED) { 280 + gmem_test(mmap_supported, vm, flags); 281 + gmem_test(fault_overflow, vm, flags); 282 + } else { 283 + gmem_test(fault_private, vm, flags); 284 + } 285 + 286 + gmem_test(mmap_cow, vm, flags); 283 287 } else { 284 - test_mmap_not_supported(fd, page_size, total_size); 288 + gmem_test(mmap_not_supported, vm, flags); 285 289 } 286 290 287 - test_file_size(fd, page_size, total_size); 288 - test_fallocate(fd, page_size, total_size); 289 - test_invalid_punch_hole(fd, page_size, total_size); 291 + gmem_test(file_size, vm, flags); 292 + gmem_test(fallocate, vm, flags); 293 + gmem_test(invalid_punch_hole, vm, flags); 294 + } 290 
295 291 - test_guest_memfd_flags(vm, flags); 296 + static void test_guest_memfd(unsigned long vm_type) 297 + { 298 + struct kvm_vm *vm = vm_create_barebones_type(vm_type); 299 + uint64_t flags; 292 300 293 - close(fd); 301 + test_guest_memfd_flags(vm); 302 + 303 + __test_guest_memfd(vm, 0); 304 + 305 + flags = vm_check_cap(vm, KVM_CAP_GUEST_MEMFD_FLAGS); 306 + if (flags & GUEST_MEMFD_FLAG_MMAP) 307 + __test_guest_memfd(vm, GUEST_MEMFD_FLAG_MMAP); 308 + 309 + /* MMAP should always be supported if INIT_SHARED is supported. */ 310 + if (flags & GUEST_MEMFD_FLAG_INIT_SHARED) 311 + __test_guest_memfd(vm, GUEST_MEMFD_FLAG_MMAP | 312 + GUEST_MEMFD_FLAG_INIT_SHARED); 313 + 294 314 kvm_vm_free(vm); 295 315 } 296 316 ··· 336 328 size_t size; 337 329 int fd, i; 338 330 339 - if (!kvm_has_cap(KVM_CAP_GUEST_MEMFD_MMAP)) 331 + if (!kvm_check_cap(KVM_CAP_GUEST_MEMFD_FLAGS)) 340 332 return; 341 333 342 334 vm = __vm_create_shape_with_one_vcpu(VM_SHAPE_DEFAULT, &vcpu, 1, guest_code); 343 335 344 - TEST_ASSERT(vm_check_cap(vm, KVM_CAP_GUEST_MEMFD_MMAP), 345 - "Default VM type should always support guest_memfd mmap()"); 336 + TEST_ASSERT(vm_check_cap(vm, KVM_CAP_GUEST_MEMFD_FLAGS) & GUEST_MEMFD_FLAG_MMAP, 337 + "Default VM type should support MMAP, supported flags = 0x%x", 338 + vm_check_cap(vm, KVM_CAP_GUEST_MEMFD_FLAGS)); 339 + TEST_ASSERT(vm_check_cap(vm, KVM_CAP_GUEST_MEMFD_FLAGS) & GUEST_MEMFD_FLAG_INIT_SHARED, 340 + "Default VM type should support INIT_SHARED, supported flags = 0x%x", 341 + vm_check_cap(vm, KVM_CAP_GUEST_MEMFD_FLAGS)); 346 342 347 343 size = vm->page_size; 348 - fd = vm_create_guest_memfd(vm, size, GUEST_MEMFD_FLAG_MMAP); 344 + fd = vm_create_guest_memfd(vm, size, GUEST_MEMFD_FLAG_MMAP | 345 + GUEST_MEMFD_FLAG_INIT_SHARED); 349 346 vm_set_user_memory_region2(vm, slot, KVM_MEM_GUEST_MEMFD, gpa, size, NULL, fd, 0); 350 347 351 - mem = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0); 352 - TEST_ASSERT(mem != MAP_FAILED, "mmap() on guest_memfd failed"); 348 + mem = kvm_mmap(size, PROT_READ | PROT_WRITE, MAP_SHARED, fd); 353 349 memset(mem, 0xaa, size); 354 - munmap(mem, size); 350 + kvm_munmap(mem, size); 355 351 356 352 virt_pg_map(vm, gpa, gpa); 357 353 vcpu_args_set(vcpu, 2, gpa, size); ··· 363 351 364 352 TEST_ASSERT_EQ(get_ucall(vcpu, NULL), UCALL_DONE); 365 353 366 - mem = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0); 367 - TEST_ASSERT(mem != MAP_FAILED, "mmap() on guest_memfd failed"); 354 + mem = kvm_mmap(size, PROT_READ | PROT_WRITE, MAP_SHARED, fd); 368 355 for (i = 0; i < size; i++) 369 356 TEST_ASSERT_EQ(mem[i], 0xff); 370 357 ··· 376 365 unsigned long vm_types, vm_type; 377 366 378 367 TEST_REQUIRE(kvm_has_cap(KVM_CAP_GUEST_MEMFD)); 368 + 369 + page_size = getpagesize(); 379 370 380 371 /* 381 372 * Not all architectures support KVM_CAP_VM_TYPES. However, those that
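For readers tracking the converted selftest: the new gmem_test() macro is only shorthand for create/run/close, so a single invocation such as gmem_test(mmap_supported, vm, flags) expands to roughly the following (all names come from the diff above):

	{
		int fd = vm_create_guest_memfd(vm, page_size * 4, flags);

		test_mmap_supported(fd, page_size * 4);
		close(fd);
	}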
+11 -1
tools/testing/selftests/kvm/include/arm64/processor.h
··· 305 305 void test_disable_default_vgic(void); 306 306 307 307 bool vm_supports_el2(struct kvm_vm *vm); 308 - static bool vcpu_has_el2(struct kvm_vcpu *vcpu) 308 + 309 + static inline bool test_supports_el2(void) 310 + { 311 + struct kvm_vm *vm = vm_create(1); 312 + bool supported = vm_supports_el2(vm); 313 + 314 + kvm_vm_free(vm); 315 + return supported; 316 + } 317 + 318 + static inline bool vcpu_has_el2(struct kvm_vcpu *vcpu) 309 319 { 310 320 return vcpu->init.features[0] & BIT(KVM_ARM_VCPU_HAS_EL2); 311 321 }
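test_supports_el2() deliberately creates and frees a throwaway VM just to probe for EL2 support. A hypothetical caller (not shown in this excerpt) would typically use it to gate an entire test:

	/* Skip the test early when the host cannot provide a virtual EL2. */
	TEST_REQUIRE(test_supports_el2());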
+27
tools/testing/selftests/kvm/include/kvm_util.h
··· 286 286 #define __KVM_SYSCALL_ERROR(_name, _ret) \ 287 287 "%s failed, rc: %i errno: %i (%s)", (_name), (_ret), errno, strerror(errno) 288 288 289 + static inline void *__kvm_mmap(size_t size, int prot, int flags, int fd, 290 + off_t offset) 291 + { 292 + void *mem; 293 + 294 + mem = mmap(NULL, size, prot, flags, fd, offset); 295 + TEST_ASSERT(mem != MAP_FAILED, __KVM_SYSCALL_ERROR("mmap()", 296 + (int)(unsigned long)MAP_FAILED)); 297 + 298 + return mem; 299 + } 300 + 301 + static inline void *kvm_mmap(size_t size, int prot, int flags, int fd) 302 + { 303 + return __kvm_mmap(size, prot, flags, fd, 0); 304 + } 305 + 306 + static inline void kvm_munmap(void *mem, size_t size) 307 + { 308 + int ret; 309 + 310 + ret = munmap(mem, size); 311 + TEST_ASSERT(!ret, __KVM_SYSCALL_ERROR("munmap()", ret)); 312 + } 313 + 289 314 /* 290 315 * Use the "inner", double-underscore macro when reporting errors from within 291 316 * other macros so that the name of ioctl() and not its literal numeric value ··· 1297 1272 bool vm_is_gpa_protected(struct kvm_vm *vm, vm_paddr_t paddr); 1298 1273 1299 1274 uint32_t guest_get_vcpuid(void); 1275 + 1276 + bool kvm_arch_has_default_irqchip(void); 1300 1277 1301 1278 #endif /* SELFTEST_KVM_UTIL_H */
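The new wrappers fold the MAP_FAILED and munmap() return-value assertions into one place, so converted call sites shrink to one line each. A representative before/after, mirroring the mmu_stress_test.c hunk later in this diff:

	void *mem;

	/* Before: every call site open-coded the MAP_FAILED check. */
	mem = mmap(NULL, slot_size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
	TEST_ASSERT(mem != MAP_FAILED, "mmap() failed");

	/*
	 * After: kvm_mmap() asserts internally and never returns MAP_FAILED,
	 * and kvm_munmap() checks munmap()'s return code for the caller.
	 */
	mem = kvm_mmap(slot_size, PROT_READ | PROT_WRITE, MAP_SHARED, fd);
	kvm_munmap(mem, slot_size);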
+19
tools/testing/selftests/kvm/include/test_util.h
··· 8 8 #ifndef SELFTEST_KVM_TEST_UTIL_H 9 9 #define SELFTEST_KVM_TEST_UTIL_H 10 10 11 + #include <setjmp.h> 12 + #include <signal.h> 11 13 #include <stdlib.h> 12 14 #include <stdarg.h> 13 15 #include <stdbool.h> ··· 78 76 #define TEST_FAIL(fmt, ...) do { \ 79 77 TEST_ASSERT(false, fmt, ##__VA_ARGS__); \ 80 78 __builtin_unreachable(); \ 79 + } while (0) 80 + 81 + extern sigjmp_buf expect_sigbus_jmpbuf; 82 + void expect_sigbus_handler(int signum); 83 + 84 + #define TEST_EXPECT_SIGBUS(action) \ 85 + do { \ 86 + struct sigaction sa_old, sa_new = { \ 87 + .sa_handler = expect_sigbus_handler, \ 88 + }; \ 89 + \ 90 + sigaction(SIGBUS, &sa_new, &sa_old); \ 91 + if (sigsetjmp(expect_sigbus_jmpbuf, 1) == 0) { \ 92 + action; \ 93 + TEST_FAIL("'%s' should have triggered SIGBUS", #action); \ 94 + } \ 95 + sigaction(SIGBUS, &sa_old, NULL); \ 81 96 } while (0) 82 97 83 98 size_t parse_size(const char *size);
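TEST_EXPECT_SIGBUS() packages the sigaction()/sigsetjmp() dance that guest_memfd_test.c previously open-coded; the statement passed in must raise SIGBUS, otherwise the test fails. Usage as seen in the converted test earlier in this diff:

	/* Both accesses target ranges guest_memfd refuses to fault in. */
	TEST_EXPECT_SIGBUS(memset(mem, val, map_size));
	TEST_EXPECT_SIGBUS((void)READ_ONCE(mem[accessible_size]));

The jump buffer and handler themselves live in lib/test_util.c (further down), so any selftest that links against test_util.o can use the macro.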
+11 -3
tools/testing/selftests/kvm/irqfd_test.c
··· 89 89 int main(int argc, char *argv[]) 90 90 { 91 91 pthread_t racing_thread; 92 + struct kvm_vcpu *unused; 92 93 int r, i; 93 94 94 - /* Create "full" VMs, as KVM_IRQFD requires an in-kernel IRQ chip. */ 95 - vm1 = vm_create(1); 96 - vm2 = vm_create(1); 95 + TEST_REQUIRE(kvm_arch_has_default_irqchip()); 96 + 97 + /* 98 + * Create "full" VMs, as KVM_IRQFD requires an in-kernel IRQ chip. Also 99 + * create an unused vCPU as certain architectures (like arm64) need to 100 + * complete IRQ chip initialization after all possible vCPUs for a VM 101 + * have been created. 102 + */ 103 + vm1 = vm_create_with_one_vcpu(&unused, NULL); 104 + vm2 = vm_create_with_one_vcpu(&unused, NULL); 97 105 98 106 WRITE_ONCE(__eventfd, kvm_new_eventfd()); 99 107
+5
tools/testing/selftests/kvm/lib/arm64/processor.c
··· 725 725 if (vm->arch.has_gic) 726 726 close(vm->arch.gic_fd); 727 727 } 728 + 729 + bool kvm_arch_has_default_irqchip(void) 730 + { 731 + return request_vgic && kvm_supports_vgic_v3(); 732 + }
+20 -29
tools/testing/selftests/kvm/lib/kvm_util.c
··· 741 741 int ret; 742 742 743 743 if (vcpu->dirty_gfns) { 744 - ret = munmap(vcpu->dirty_gfns, vm->dirty_ring_size); 745 - TEST_ASSERT(!ret, __KVM_SYSCALL_ERROR("munmap()", ret)); 744 + kvm_munmap(vcpu->dirty_gfns, vm->dirty_ring_size); 746 745 vcpu->dirty_gfns = NULL; 747 746 } 748 747 749 - ret = munmap(vcpu->run, vcpu_mmap_sz()); 750 - TEST_ASSERT(!ret, __KVM_SYSCALL_ERROR("munmap()", ret)); 748 + kvm_munmap(vcpu->run, vcpu_mmap_sz()); 751 749 752 750 ret = close(vcpu->fd); 753 751 TEST_ASSERT(!ret, __KVM_SYSCALL_ERROR("close()", ret)); ··· 781 783 static void __vm_mem_region_delete(struct kvm_vm *vm, 782 784 struct userspace_mem_region *region) 783 785 { 784 - int ret; 785 - 786 786 rb_erase(&region->gpa_node, &vm->regions.gpa_tree); 787 787 rb_erase(&region->hva_node, &vm->regions.hva_tree); 788 788 hash_del(&region->slot_node); 789 789 790 790 sparsebit_free(&region->unused_phy_pages); 791 791 sparsebit_free(&region->protected_phy_pages); 792 - ret = munmap(region->mmap_start, region->mmap_size); 793 - TEST_ASSERT(!ret, __KVM_SYSCALL_ERROR("munmap()", ret)); 792 + kvm_munmap(region->mmap_start, region->mmap_size); 794 793 if (region->fd >= 0) { 795 794 /* There's an extra map when using shared memory. */ 796 - ret = munmap(region->mmap_alias, region->mmap_size); 797 - TEST_ASSERT(!ret, __KVM_SYSCALL_ERROR("munmap()", ret)); 795 + kvm_munmap(region->mmap_alias, region->mmap_size); 798 796 close(region->fd); 799 797 } 800 798 if (region->region.guest_memfd >= 0) ··· 1047 1053 region->fd = kvm_memfd_alloc(region->mmap_size, 1048 1054 src_type == VM_MEM_SRC_SHARED_HUGETLB); 1049 1055 1050 - region->mmap_start = mmap(NULL, region->mmap_size, 1051 - PROT_READ | PROT_WRITE, 1052 - vm_mem_backing_src_alias(src_type)->flag, 1053 - region->fd, 0); 1054 - TEST_ASSERT(region->mmap_start != MAP_FAILED, 1055 - __KVM_SYSCALL_ERROR("mmap()", (int)(unsigned long)MAP_FAILED)); 1056 + region->mmap_start = kvm_mmap(region->mmap_size, PROT_READ | PROT_WRITE, 1057 + vm_mem_backing_src_alias(src_type)->flag, 1058 + region->fd); 1056 1059 1057 1060 TEST_ASSERT(!is_backing_src_hugetlb(src_type) || 1058 1061 region->mmap_start == align_ptr_up(region->mmap_start, backing_src_pagesz), ··· 1120 1129 1121 1130 /* If shared memory, create an alias. 
*/ 1122 1131 if (region->fd >= 0) { 1123 - region->mmap_alias = mmap(NULL, region->mmap_size, 1124 - PROT_READ | PROT_WRITE, 1125 - vm_mem_backing_src_alias(src_type)->flag, 1126 - region->fd, 0); 1127 - TEST_ASSERT(region->mmap_alias != MAP_FAILED, 1128 - __KVM_SYSCALL_ERROR("mmap()", (int)(unsigned long)MAP_FAILED)); 1132 + region->mmap_alias = kvm_mmap(region->mmap_size, 1133 + PROT_READ | PROT_WRITE, 1134 + vm_mem_backing_src_alias(src_type)->flag, 1135 + region->fd); 1129 1136 1130 1137 /* Align host alias address */ 1131 1138 region->host_alias = align_ptr_up(region->mmap_alias, alignment); ··· 1333 1344 TEST_ASSERT(vcpu_mmap_sz() >= sizeof(*vcpu->run), "vcpu mmap size " 1334 1345 "smaller than expected, vcpu_mmap_sz: %zi expected_min: %zi", 1335 1346 vcpu_mmap_sz(), sizeof(*vcpu->run)); 1336 - vcpu->run = (struct kvm_run *) mmap(NULL, vcpu_mmap_sz(), 1337 - PROT_READ | PROT_WRITE, MAP_SHARED, vcpu->fd, 0); 1338 - TEST_ASSERT(vcpu->run != MAP_FAILED, 1339 - __KVM_SYSCALL_ERROR("mmap()", (int)(unsigned long)MAP_FAILED)); 1347 + vcpu->run = kvm_mmap(vcpu_mmap_sz(), PROT_READ | PROT_WRITE, 1348 + MAP_SHARED, vcpu->fd); 1340 1349 1341 1350 if (kvm_has_cap(KVM_CAP_BINARY_STATS_FD)) 1342 1351 vcpu->stats.fd = vcpu_get_stats_fd(vcpu); ··· 1781 1794 page_size * KVM_DIRTY_LOG_PAGE_OFFSET); 1782 1795 TEST_ASSERT(addr == MAP_FAILED, "Dirty ring mapped exec"); 1783 1796 1784 - addr = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, vcpu->fd, 1785 - page_size * KVM_DIRTY_LOG_PAGE_OFFSET); 1786 - TEST_ASSERT(addr != MAP_FAILED, "Dirty ring map failed"); 1797 + addr = __kvm_mmap(size, PROT_READ | PROT_WRITE, MAP_SHARED, vcpu->fd, 1798 + page_size * KVM_DIRTY_LOG_PAGE_OFFSET); 1787 1799 1788 1800 vcpu->dirty_gfns = addr; 1789 1801 vcpu->dirty_gfns_count = size / sizeof(struct kvm_dirty_gfn); ··· 2329 2343 2330 2344 pg = paddr >> vm->page_shift; 2331 2345 return sparsebit_is_set(region->protected_phy_pages, pg); 2346 + } 2347 + 2348 + __weak bool kvm_arch_has_default_irqchip(void) 2349 + { 2350 + return false; 2332 2351 }
+5
tools/testing/selftests/kvm/lib/s390/processor.c
··· 221 221 void assert_on_unhandled_exception(struct kvm_vcpu *vcpu) 222 222 { 223 223 } 224 + 225 + bool kvm_arch_has_default_irqchip(void) 226 + { 227 + return true; 228 + }
+7
tools/testing/selftests/kvm/lib/test_util.c
··· 18 18 19 19 #include "test_util.h" 20 20 21 + sigjmp_buf expect_sigbus_jmpbuf; 22 + 23 + void __attribute__((used)) expect_sigbus_handler(int signum) 24 + { 25 + siglongjmp(expect_sigbus_jmpbuf, 1); 26 + } 27 + 21 28 /* 22 29 * Random number generator that is usable from guest code. This is the 23 30 * Park-Miller LCG using standard constants.
+5
tools/testing/selftests/kvm/lib/x86/processor.c
··· 1318 1318 1319 1319 return ret; 1320 1320 } 1321 + 1322 + bool kvm_arch_has_default_irqchip(void) 1323 + { 1324 + return true; 1325 + }
+2 -3
tools/testing/selftests/kvm/mmu_stress_test.c
··· 339 339 TEST_ASSERT(max_gpa > (4 * slot_size), "MAXPHYADDR <4gb "); 340 340 341 341 fd = kvm_memfd_alloc(slot_size, hugepages); 342 - mem = mmap(NULL, slot_size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0); 343 - TEST_ASSERT(mem != MAP_FAILED, "mmap() failed"); 342 + mem = kvm_mmap(slot_size, PROT_READ | PROT_WRITE, MAP_SHARED, fd); 344 343 345 344 TEST_ASSERT(!madvise(mem, slot_size, MADV_NOHUGEPAGE), "madvise() failed"); 346 345 ··· 412 413 for (slot = (slot - 1) & ~1ull; slot >= first_slot; slot -= 2) 413 414 vm_set_user_memory_region(vm, slot, 0, 0, 0, NULL); 414 415 415 - munmap(mem, slot_size / 2); 416 + kvm_munmap(mem, slot_size / 2); 416 417 417 418 /* Sanity check that the vCPUs actually ran. */ 418 419 for (i = 0; i < nr_vcpus; i++)
+114 -17
tools/testing/selftests/kvm/pre_fault_memory_test.c
··· 10 10 #include <test_util.h> 11 11 #include <kvm_util.h> 12 12 #include <processor.h> 13 + #include <pthread.h> 13 14 14 15 /* Arbitrarily chosen values */ 15 16 #define TEST_SIZE (SZ_2M + PAGE_SIZE) ··· 31 30 GUEST_DONE(); 32 31 } 33 32 34 - static void pre_fault_memory(struct kvm_vcpu *vcpu, u64 gpa, u64 size, 35 - u64 left) 33 + struct slot_worker_data { 34 + struct kvm_vm *vm; 35 + u64 gpa; 36 + uint32_t flags; 37 + bool worker_ready; 38 + bool prefault_ready; 39 + bool recreate_slot; 40 + }; 41 + 42 + static void *delete_slot_worker(void *__data) 43 + { 44 + struct slot_worker_data *data = __data; 45 + struct kvm_vm *vm = data->vm; 46 + 47 + WRITE_ONCE(data->worker_ready, true); 48 + 49 + while (!READ_ONCE(data->prefault_ready)) 50 + cpu_relax(); 51 + 52 + vm_mem_region_delete(vm, TEST_SLOT); 53 + 54 + while (!READ_ONCE(data->recreate_slot)) 55 + cpu_relax(); 56 + 57 + vm_userspace_mem_region_add(vm, VM_MEM_SRC_ANONYMOUS, data->gpa, 58 + TEST_SLOT, TEST_NPAGES, data->flags); 59 + 60 + return NULL; 61 + } 62 + 63 + static void pre_fault_memory(struct kvm_vcpu *vcpu, u64 base_gpa, u64 offset, 64 + u64 size, u64 expected_left, bool private) 36 65 { 37 66 struct kvm_pre_fault_memory range = { 38 - .gpa = gpa, 67 + .gpa = base_gpa + offset, 39 68 .size = size, 40 69 .flags = 0, 41 70 }; 42 - u64 prev; 71 + struct slot_worker_data data = { 72 + .vm = vcpu->vm, 73 + .gpa = base_gpa, 74 + .flags = private ? KVM_MEM_GUEST_MEMFD : 0, 75 + }; 76 + bool slot_recreated = false; 77 + pthread_t slot_worker; 43 78 int ret, save_errno; 79 + u64 prev; 44 80 45 - do { 81 + /* 82 + * Concurrently delete (and recreate) the slot to test KVM's handling 83 + * of a racing memslot deletion with prefaulting. 84 + */ 85 + pthread_create(&slot_worker, NULL, delete_slot_worker, &data); 86 + 87 + while (!READ_ONCE(data.worker_ready)) 88 + cpu_relax(); 89 + 90 + WRITE_ONCE(data.prefault_ready, true); 91 + 92 + for (;;) { 46 93 prev = range.size; 47 94 ret = __vcpu_ioctl(vcpu, KVM_PRE_FAULT_MEMORY, &range); 48 95 save_errno = errno; ··· 98 49 "%sexpecting range.size to change on %s", 99 50 ret < 0 ? "not " : "", 100 51 ret < 0 ? "failure" : "success"); 101 - } while (ret >= 0 ? range.size : save_errno == EINTR); 102 52 103 - TEST_ASSERT(range.size == left, 104 - "Completed with %lld bytes left, expected %" PRId64, 105 - range.size, left); 53 + /* 54 + * Immediately retry prefaulting if KVM was interrupted by an 55 + * unrelated signal/event. 56 + */ 57 + if (ret < 0 && save_errno == EINTR) 58 + continue; 106 59 107 - if (left == 0) 108 - __TEST_ASSERT_VM_VCPU_IOCTL(!ret, "KVM_PRE_FAULT_MEMORY", ret, vcpu->vm); 60 + /* 61 + * Tell the worker to recreate the slot in order to complete 62 + * prefaulting (if prefault didn't already succeed before the 63 + * slot was deleted) and/or to prepare for the next testcase. 64 + * Wait for the worker to exit so that the next invocation of 65 + * prefaulting is guaranteed to complete (assuming no KVM bugs). 66 + */ 67 + if (!slot_recreated) { 68 + WRITE_ONCE(data.recreate_slot, true); 69 + pthread_join(slot_worker, NULL); 70 + slot_recreated = true; 71 + 72 + /* 73 + * Retry prefaulting to get a stable result, i.e. to 74 + * avoid seeing random EAGAIN failures. Don't retry if 75 + * prefaulting already succeeded, as KVM disallows 76 + * prefaulting with size=0, i.e. blindly retrying would 77 + * result in test failures due to EINVAL. KVM should 78 + * always return success if all bytes are prefaulted, 79 + * i.e. there is no need to guard against EAGAIN being 80 + * returned. 
81 + */ 82 + if (range.size) 83 + continue; 84 + } 85 + 86 + /* 87 + * All done if there are no remaining bytes to prefault, or if 88 + * prefaulting failed (EINTR was handled above, and EAGAIN due 89 + * to prefaulting a memslot that's being actively deleted should 90 + * be impossible since the memslot has already been recreated). 91 + */ 92 + if (!range.size || ret < 0) 93 + break; 94 + } 95 + 96 + TEST_ASSERT(range.size == expected_left, 97 + "Completed with %llu bytes left, expected %lu", 98 + range.size, expected_left); 99 + 100 + /* 101 + * Assert success if prefaulting the entire range should succeed, i.e. 102 + * complete with no bytes remaining. Otherwise prefaulting should have 103 + * failed due to ENOENT (due to RET_PF_EMULATE for emulated MMIO when 104 + * no memslot exists). 105 + */ 106 + if (!expected_left) 107 + TEST_ASSERT_VM_VCPU_IOCTL(!ret, KVM_PRE_FAULT_MEMORY, ret, vcpu->vm); 109 108 else 110 - /* No memory slot causes RET_PF_EMULATE. it results in -ENOENT. */ 111 - __TEST_ASSERT_VM_VCPU_IOCTL(ret && save_errno == ENOENT, 112 - "KVM_PRE_FAULT_MEMORY", ret, vcpu->vm); 109 + TEST_ASSERT_VM_VCPU_IOCTL(ret && save_errno == ENOENT, 110 + KVM_PRE_FAULT_MEMORY, ret, vcpu->vm); 113 111 } 114 112 115 113 static void __test_pre_fault_memory(unsigned long vm_type, bool private) ··· 193 97 194 98 if (private) 195 99 vm_mem_set_private(vm, guest_test_phys_mem, TEST_SIZE); 196 - pre_fault_memory(vcpu, guest_test_phys_mem, SZ_2M, 0); 197 - pre_fault_memory(vcpu, guest_test_phys_mem + SZ_2M, PAGE_SIZE * 2, PAGE_SIZE); 198 - pre_fault_memory(vcpu, guest_test_phys_mem + TEST_SIZE, PAGE_SIZE, PAGE_SIZE); 100 + 101 + pre_fault_memory(vcpu, guest_test_phys_mem, 0, SZ_2M, 0, private); 102 + pre_fault_memory(vcpu, guest_test_phys_mem, SZ_2M, PAGE_SIZE * 2, PAGE_SIZE, private); 103 + pre_fault_memory(vcpu, guest_test_phys_mem, TEST_SIZE, PAGE_SIZE, PAGE_SIZE, private); 199 104 200 105 vcpu_args_set(vcpu, 1, guest_test_virt_mem); 201 106 vcpu_run(vcpu);
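As a quick legend for the three prefault calls near the end of __test_pre_fault_memory(): the memslot spans TEST_SIZE = SZ_2M + PAGE_SIZE bytes, so the chosen offsets cover the fully in-slot, straddling, and fully out-of-slot cases (annotations are mine; the calls are verbatim from the diff):

	pre_fault_memory(vcpu, guest_test_phys_mem, 0, SZ_2M, 0, private);                     /* entirely inside the slot, nothing left      */
	pre_fault_memory(vcpu, guest_test_phys_mem, SZ_2M, PAGE_SIZE * 2, PAGE_SIZE, private); /* second page is past the slot, one page left */
	pre_fault_memory(vcpu, guest_test_phys_mem, TEST_SIZE, PAGE_SIZE, PAGE_SIZE, private); /* entirely past the slot, fails with -ENOENT  */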
+7 -9
tools/testing/selftests/kvm/s390/ucontrol_test.c
··· 142 142 self->kvm_run_size = ioctl(self->kvm_fd, KVM_GET_VCPU_MMAP_SIZE, NULL); 143 143 ASSERT_GE(self->kvm_run_size, sizeof(struct kvm_run)) 144 144 TH_LOG(KVM_IOCTL_ERROR(KVM_GET_VCPU_MMAP_SIZE, self->kvm_run_size)); 145 - self->run = (struct kvm_run *)mmap(NULL, self->kvm_run_size, 146 - PROT_READ | PROT_WRITE, MAP_SHARED, self->vcpu_fd, 0); 147 - ASSERT_NE(self->run, MAP_FAILED); 145 + self->run = kvm_mmap(self->kvm_run_size, PROT_READ | PROT_WRITE, 146 + MAP_SHARED, self->vcpu_fd); 148 147 /** 149 148 * For virtual cpus that have been created with S390 user controlled 150 149 * virtual machines, the resulting vcpu fd can be memory mapped at page 151 150 * offset KVM_S390_SIE_PAGE_OFFSET in order to obtain a memory map of 152 151 * the virtual cpu's hardware control block. 153 152 */ 154 - self->sie_block = (struct kvm_s390_sie_block *)mmap(NULL, PAGE_SIZE, 155 - PROT_READ | PROT_WRITE, MAP_SHARED, 156 - self->vcpu_fd, KVM_S390_SIE_PAGE_OFFSET << PAGE_SHIFT); 157 - ASSERT_NE(self->sie_block, MAP_FAILED); 153 + self->sie_block = __kvm_mmap(PAGE_SIZE, PROT_READ | PROT_WRITE, 154 + MAP_SHARED, self->vcpu_fd, 155 + KVM_S390_SIE_PAGE_OFFSET << PAGE_SHIFT); 158 156 159 157 TH_LOG("VM created %p %p", self->run, self->sie_block); 160 158 ··· 184 186 185 187 FIXTURE_TEARDOWN(uc_kvm) 186 188 { 187 - munmap(self->sie_block, PAGE_SIZE); 188 - munmap(self->run, self->kvm_run_size); 189 + kvm_munmap(self->sie_block, PAGE_SIZE); 190 + kvm_munmap(self->run, self->kvm_run_size); 189 191 close(self->vcpu_fd); 190 192 close(self->vm_fd); 191 193 close(self->kvm_fd);
+8 -9
tools/testing/selftests/kvm/set_memory_region_test.c
··· 433 433 pr_info("Adding slots 0..%i, each memory region with %dK size\n", 434 434 (max_mem_slots - 1), MEM_REGION_SIZE >> 10); 435 435 436 - mem = mmap(NULL, (size_t)max_mem_slots * MEM_REGION_SIZE + alignment, 437 - PROT_READ | PROT_WRITE, 438 - MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0); 439 - TEST_ASSERT(mem != MAP_FAILED, "Failed to mmap() host"); 436 + 437 + mem = kvm_mmap((size_t)max_mem_slots * MEM_REGION_SIZE + alignment, 438 + PROT_READ | PROT_WRITE, 439 + MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1); 440 440 mem_aligned = (void *)(((size_t) mem + alignment - 1) & ~(alignment - 1)); 441 441 442 442 for (slot = 0; slot < max_mem_slots; slot++) ··· 446 446 mem_aligned + (uint64_t)slot * MEM_REGION_SIZE); 447 447 448 448 /* Check it cannot be added memory slots beyond the limit */ 449 - mem_extra = mmap(NULL, MEM_REGION_SIZE, PROT_READ | PROT_WRITE, 450 - MAP_PRIVATE | MAP_ANONYMOUS, -1, 0); 451 - TEST_ASSERT(mem_extra != MAP_FAILED, "Failed to mmap() host"); 449 + mem_extra = kvm_mmap(MEM_REGION_SIZE, PROT_READ | PROT_WRITE, 450 + MAP_PRIVATE | MAP_ANONYMOUS, -1); 452 451 453 452 ret = __vm_set_user_memory_region(vm, max_mem_slots, 0, 454 453 (uint64_t)max_mem_slots * MEM_REGION_SIZE, ··· 455 456 TEST_ASSERT(ret == -1 && errno == EINVAL, 456 457 "Adding one more memory slot should fail with EINVAL"); 457 458 458 - munmap(mem, (size_t)max_mem_slots * MEM_REGION_SIZE + alignment); 459 - munmap(mem_extra, MEM_REGION_SIZE); 459 + kvm_munmap(mem, (size_t)max_mem_slots * MEM_REGION_SIZE + alignment); 460 + kvm_munmap(mem_extra, MEM_REGION_SIZE); 460 461 kvm_vm_free(vm); 461 462 } 462 463
+1
virt/kvm/Kconfig
··· 113 113 bool 114 114 115 115 config KVM_GUEST_MEMFD 116 + depends on KVM_GENERIC_MMU_NOTIFIER 116 117 select XARRAY_MULTI 117 118 bool 118 119
+49 -26
virt/kvm/guest_memfd.c
··· 102 102 return filemap_grab_folio(inode->i_mapping, index); 103 103 } 104 104 105 - static void kvm_gmem_invalidate_begin(struct kvm_gmem *gmem, pgoff_t start, 106 - pgoff_t end) 105 + static enum kvm_gfn_range_filter kvm_gmem_get_invalidate_filter(struct inode *inode) 106 + { 107 + if ((u64)inode->i_private & GUEST_MEMFD_FLAG_INIT_SHARED) 108 + return KVM_FILTER_SHARED; 109 + 110 + return KVM_FILTER_PRIVATE; 111 + } 112 + 113 + static void __kvm_gmem_invalidate_begin(struct kvm_gmem *gmem, pgoff_t start, 114 + pgoff_t end, 115 + enum kvm_gfn_range_filter attr_filter) 107 116 { 108 117 bool flush = false, found_memslot = false; 109 118 struct kvm_memory_slot *slot; ··· 127 118 .end = slot->base_gfn + min(pgoff + slot->npages, end) - pgoff, 128 119 .slot = slot, 129 120 .may_block = true, 130 - /* guest memfd is relevant to only private mappings. */ 131 - .attr_filter = KVM_FILTER_PRIVATE, 121 + .attr_filter = attr_filter, 132 122 }; 133 123 134 124 if (!found_memslot) { ··· 147 139 KVM_MMU_UNLOCK(kvm); 148 140 } 149 141 150 - static void kvm_gmem_invalidate_end(struct kvm_gmem *gmem, pgoff_t start, 151 - pgoff_t end) 142 + static void kvm_gmem_invalidate_begin(struct inode *inode, pgoff_t start, 143 + pgoff_t end) 144 + { 145 + struct list_head *gmem_list = &inode->i_mapping->i_private_list; 146 + enum kvm_gfn_range_filter attr_filter; 147 + struct kvm_gmem *gmem; 148 + 149 + attr_filter = kvm_gmem_get_invalidate_filter(inode); 150 + 151 + list_for_each_entry(gmem, gmem_list, entry) 152 + __kvm_gmem_invalidate_begin(gmem, start, end, attr_filter); 153 + } 154 + 155 + static void __kvm_gmem_invalidate_end(struct kvm_gmem *gmem, pgoff_t start, 156 + pgoff_t end) 152 157 { 153 158 struct kvm *kvm = gmem->kvm; 154 159 ··· 172 151 } 173 152 } 174 153 175 - static long kvm_gmem_punch_hole(struct inode *inode, loff_t offset, loff_t len) 154 + static void kvm_gmem_invalidate_end(struct inode *inode, pgoff_t start, 155 + pgoff_t end) 176 156 { 177 157 struct list_head *gmem_list = &inode->i_mapping->i_private_list; 158 + struct kvm_gmem *gmem; 159 + 160 + list_for_each_entry(gmem, gmem_list, entry) 161 + __kvm_gmem_invalidate_end(gmem, start, end); 162 + } 163 + 164 + static long kvm_gmem_punch_hole(struct inode *inode, loff_t offset, loff_t len) 165 + { 178 166 pgoff_t start = offset >> PAGE_SHIFT; 179 167 pgoff_t end = (offset + len) >> PAGE_SHIFT; 180 - struct kvm_gmem *gmem; 181 168 182 169 /* 183 170 * Bindings must be stable across invalidation to ensure the start+end ··· 193 164 */ 194 165 filemap_invalidate_lock(inode->i_mapping); 195 166 196 - list_for_each_entry(gmem, gmem_list, entry) 197 - kvm_gmem_invalidate_begin(gmem, start, end); 167 + kvm_gmem_invalidate_begin(inode, start, end); 198 168 199 169 truncate_inode_pages_range(inode->i_mapping, offset, offset + len - 1); 200 170 201 - list_for_each_entry(gmem, gmem_list, entry) 202 - kvm_gmem_invalidate_end(gmem, start, end); 171 + kvm_gmem_invalidate_end(inode, start, end); 203 172 204 173 filemap_invalidate_unlock(inode->i_mapping); 205 174 ··· 307 280 * Zap all SPTEs pointed at by this file. Do not free the backing 308 281 * memory, as its lifetime is associated with the inode, not the file. 
309 282 */ 310 - kvm_gmem_invalidate_begin(gmem, 0, -1ul); 311 - kvm_gmem_invalidate_end(gmem, 0, -1ul); 283 + __kvm_gmem_invalidate_begin(gmem, 0, -1ul, 284 + kvm_gmem_get_invalidate_filter(inode)); 285 + __kvm_gmem_invalidate_end(gmem, 0, -1ul); 312 286 313 287 list_del(&gmem->entry); 314 288 ··· 354 326 vm_fault_t ret = VM_FAULT_LOCKED; 355 327 356 328 if (((loff_t)vmf->pgoff << PAGE_SHIFT) >= i_size_read(inode)) 329 + return VM_FAULT_SIGBUS; 330 + 331 + if (!((u64)inode->i_private & GUEST_MEMFD_FLAG_INIT_SHARED)) 357 332 return VM_FAULT_SIGBUS; 358 333 359 334 folio = kvm_gmem_get_folio(inode, vmf->pgoff); ··· 431 400 432 401 static int kvm_gmem_error_folio(struct address_space *mapping, struct folio *folio) 433 402 { 434 - struct list_head *gmem_list = &mapping->i_private_list; 435 - struct kvm_gmem *gmem; 436 403 pgoff_t start, end; 437 404 438 405 filemap_invalidate_lock_shared(mapping); ··· 438 409 start = folio->index; 439 410 end = start + folio_nr_pages(folio); 440 411 441 - list_for_each_entry(gmem, gmem_list, entry) 442 - kvm_gmem_invalidate_begin(gmem, start, end); 412 + kvm_gmem_invalidate_begin(mapping->host, start, end); 443 413 444 414 /* 445 415 * Do not truncate the range, what action is taken in response to the ··· 449 421 * error to userspace. 450 422 */ 451 423 452 - list_for_each_entry(gmem, gmem_list, entry) 453 - kvm_gmem_invalidate_end(gmem, start, end); 424 + kvm_gmem_invalidate_end(mapping->host, start, end); 454 425 455 426 filemap_invalidate_unlock_shared(mapping); 456 427 ··· 485 458 .setattr = kvm_gmem_setattr, 486 459 }; 487 460 488 - bool __weak kvm_arch_supports_gmem_mmap(struct kvm *kvm) 461 + bool __weak kvm_arch_supports_gmem_init_shared(struct kvm *kvm) 489 462 { 490 463 return true; 491 464 } ··· 549 522 { 550 523 loff_t size = args->size; 551 524 u64 flags = args->flags; 552 - u64 valid_flags = 0; 553 525 554 - if (kvm_arch_supports_gmem_mmap(kvm)) 555 - valid_flags |= GUEST_MEMFD_FLAG_MMAP; 556 - 557 - if (flags & ~valid_flags) 526 + if (flags & ~kvm_gmem_get_supported_flags(kvm)) 558 527 return -EINVAL; 559 528 560 529 if (size <= 0 || !PAGE_ALIGNED(size))
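kvm_gmem_create() above now validates flags against kvm_gmem_get_supported_flags(), whose definition is not part of this excerpt. A plausible sketch that is consistent with these call sites and with the kvm_arch_supports_gmem_init_shared() hook introduced here (treat the exact body as an assumption):

static inline u64 kvm_gmem_get_supported_flags(struct kvm *kvm)
{
	u64 flags = GUEST_MEMFD_FLAG_MMAP;

	/*
	 * Sketch only: INIT_SHARED is gated on the per-arch hook; a NULL kvm
	 * (the system-wide capability check) reports everything as supported.
	 */
	if (!kvm || kvm_arch_supports_gmem_init_shared(kvm))
		flags |= GUEST_MEMFD_FLAG_INIT_SHARED;

	return flags;
}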
+2 -2
virt/kvm/kvm_main.c
··· 4928 4928 #ifdef CONFIG_KVM_GUEST_MEMFD 4929 4929 case KVM_CAP_GUEST_MEMFD: 4930 4930 return 1; 4931 - case KVM_CAP_GUEST_MEMFD_MMAP: 4932 - return !kvm || kvm_arch_supports_gmem_mmap(kvm); 4931 + case KVM_CAP_GUEST_MEMFD_FLAGS: 4932 + return kvm_gmem_get_supported_flags(kvm); 4933 4933 #endif 4934 4934 default: 4935 4935 break;
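With the rework, the capability check returns the supported flag mask rather than a boolean, so userspace queries it per VM and passes only advertised flags when creating a guest_memfd. A minimal sketch using the selftest helpers shown earlier (page_size * 4 matches the sizing used by the guest_memfd test):

	uint64_t flags = vm_check_cap(vm, KVM_CAP_GUEST_MEMFD_FLAGS);
	int fd;

	/* Only request INIT_SHARED when this VM actually advertises it. */
	if (flags & GUEST_MEMFD_FLAG_INIT_SHARED)
		fd = vm_create_guest_memfd(vm, page_size * 4,
					   GUEST_MEMFD_FLAG_MMAP |
					   GUEST_MEMFD_FLAG_INIT_SHARED);
	else
		fd = vm_create_guest_memfd(vm, page_size * 4, 0);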