Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm

Pull KVM fixes from Paolo Bonzini:
"Many patches, pretty much all of them small, that accumulated while I
was on vacation.

ARM:

- Remove the last leftovers of the ill-fated FPSIMD host state
mapping at EL2 stage-1

- Fix unexpected advertisement to the guest of unimplemented S2 base
granule sizes

- Gracefully fail initialising pKVM if the interrupt controller isn't
GICv3

- Also gracefully fail initialising pKVM if the carveout allocation
fails

- Fix the computing of the minimum MMIO range required for the host
on stage-2 fault

- Fix the generation of the GICv3 Maintenance Interrupt in nested
mode

x86:

- Reject SEV{-ES} intra-host migration if one or more vCPUs are
actively being created, so as not to create a non-SEV{-ES} vCPU in
an SEV{-ES} VM

- Use a pre-allocated, per-vCPU buffer for handling de-sparsification
of vCPU masks in Hyper-V hypercalls; fixes a "stack frame too
large" issue

- Allow out-of-range/invalid Xen event channel ports when configuring
IRQ routing, to avoid dictating a specific ioctl() ordering to
userspace

- Conditionally reschedule when setting memory attributes to avoid
soft lockups when userspace converts huge swaths of memory to/from
private

- Add back MWAIT as a required feature for the MONITOR/MWAIT selftest

- Add a missing field in struct sev_data_snp_launch_start that
resulted in the guest-visible workarounds field being filled at the
wrong offset

- Skip non-canonical addresses when processing Hyper-V PV TLB flushes
to avoid VM-Fail on INVVPID

- Advertise supported TDX TDVMCALLs to userspace

- Pass SetupEventNotifyInterrupt arguments to userspace

- Fix TSC frequency underflow"

* tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm:
KVM: x86: avoid underflow when scaling TSC frequency
KVM: arm64: Remove kvm_arch_vcpu_run_map_fp()
KVM: arm64: Fix handling of FEAT_GTG for unimplemented granule sizes
KVM: arm64: Don't free hyp pages with pKVM on GICv2
KVM: arm64: Fix error path in init_hyp_mode()
KVM: arm64: Adjust range correctly during host stage-2 faults
KVM: arm64: nv: Fix MI line level calculation in vgic_v3_nested_update_mi()
KVM: x86/hyper-v: Skip non-canonical addresses during PV TLB flush
KVM: SVM: Add missing member in SNP_LAUNCH_START command structure
Documentation: KVM: Fix unexpected unindent warnings
KVM: selftests: Add back the missing check of MONITOR/MWAIT availability
KVM: Allow CPU to reschedule while setting per-page memory attributes
KVM: x86/xen: Allow 'out of range' event channel ports in IRQ routing table.
KVM: x86/hyper-v: Use preallocated per-vCPU buffer for de-sparsified vCPU masks
KVM: SVM: Initialize vmsa_pa in VMCB to INVALID_PAGE if VMSA page is NULL
KVM: SVM: Reject SEV{-ES} intra host migration if vCPU creation is in-flight
KVM: TDX: Report supported optional TDVMCALLs in TDX capabilities
KVM: TDX: Exit to userspace for SetupEventNotifyInterrupt

+165 -70
+21 -14
Documentation/virt/kvm/api.rst
@@ -7196 +7196 @@
 		u64 leaf;
 		u64 r11, r12, r13, r14;
 	    } get_tdvmcall_info;
+	    struct {
+		u64 ret;
+		u64 vector;
+	    } setup_event_notify;
 	};
 } tdx;
@@ -7214 +7210 @@
 inputs and outputs of the TDVMCALL. Currently the following values of
 ``nr`` are defined:

-* ``TDVMCALL_GET_QUOTE``: the guest has requested to generate a TD-Quote
-  signed by a service hosting TD-Quoting Enclave operating on the host.
-  Parameters and return value are in the ``get_quote`` field of the union.
-  The ``gpa`` field and ``size`` specify the guest physical address
-  (without the shared bit set) and the size of a shared-memory buffer, in
-  which the TDX guest passes a TD Report. The ``ret`` field represents
-  the return value of the GetQuote request. When the request has been
-  queued successfully, the TDX guest can poll the status field in the
-  shared-memory area to check whether the Quote generation is completed or
-  not. When completed, the generated Quote is returned via the same buffer.
+* ``TDVMCALL_GET_QUOTE``: the guest has requested to generate a TD-Quote
+  signed by a service hosting TD-Quoting Enclave operating on the host.
+  Parameters and return value are in the ``get_quote`` field of the union.
+  The ``gpa`` field and ``size`` specify the guest physical address
+  (without the shared bit set) and the size of a shared-memory buffer, in
+  which the TDX guest passes a TD Report. The ``ret`` field represents
+  the return value of the GetQuote request. When the request has been
+  queued successfully, the TDX guest can poll the status field in the
+  shared-memory area to check whether the Quote generation is completed or
+  not. When completed, the generated Quote is returned via the same buffer.

-* ``TDVMCALL_GET_TD_VM_CALL_INFO``: the guest has requested the support
-  status of TDVMCALLs. The output values for the given leaf should be
-  placed in fields from ``r11`` to ``r14`` of the ``get_tdvmcall_info``
-  field of the union.
+* ``TDVMCALL_GET_TD_VM_CALL_INFO``: the guest has requested the support
+  status of TDVMCALLs. The output values for the given leaf should be
+  placed in fields from ``r11`` to ``r14`` of the ``get_tdvmcall_info``
+  field of the union.
+
+* ``TDVMCALL_SETUP_EVENT_NOTIFY_INTERRUPT``: the guest has requested to
+  set up a notification interrupt for vector ``vector``.

 KVM may add support for more values in the future that may cause a userspace
 exit, even without calls to ``KVM_ENABLE_CAP`` or similar. In this case,
+14 -1
Documentation/virt/kvm/x86/intel-tdx.rst
@@ -79 +79 @@
 struct kvm_tdx_capabilities {
 	__u64 supported_attrs;
 	__u64 supported_xfam;
-	__u64 reserved[254];
+
+	/* TDG.VP.VMCALL hypercalls executed in kernel and forwarded to
+	 * userspace, respectively
+	 */
+	__u64 kernel_tdvmcallinfo_1_r11;
+	__u64 user_tdvmcallinfo_1_r11;
+
+	/* TDG.VP.VMCALL instruction executions subfunctions executed in kernel
+	 * and forwarded to userspace, respectively
+	 */
+	__u64 kernel_tdvmcallinfo_1_r12;
+	__u64 user_tdvmcallinfo_1_r12;
+
+	__u64 reserved[250];

 	/* Configurable CPUID bits for userspace */
 	struct kvm_cpuid2 cpuid;
-1
arch/arm64/include/asm/kvm_host.h
@@ -1480 +1480 @@
 			      struct reg_mask_range *range);

 /* Guest/host FPSIMD coordination helpers */
-int kvm_arch_vcpu_run_map_fp(struct kvm_vcpu *vcpu);
 void kvm_arch_vcpu_load_fp(struct kvm_vcpu *vcpu);
 void kvm_arch_vcpu_ctxflush_fp(struct kvm_vcpu *vcpu);
 void kvm_arch_vcpu_ctxsync_fp(struct kvm_vcpu *vcpu);
+10 -6
arch/arm64/kvm/arm.c
@@ -825 +825 @@
 	if (!kvm_arm_vcpu_is_finalized(vcpu))
 		return -EPERM;

-	ret = kvm_arch_vcpu_run_map_fp(vcpu);
-	if (ret)
-		return ret;
-
 	if (likely(vcpu_has_run_once(vcpu)))
 		return 0;
@@ -2125 +2129 @@

 static void cpu_hyp_uninit(void *discard)
 {
-	if (__this_cpu_read(kvm_hyp_initialized)) {
+	if (!is_protected_kvm_enabled() && __this_cpu_read(kvm_hyp_initialized)) {
 		cpu_hyp_reset();
 		__this_cpu_write(kvm_hyp_initialized, 0);
 	}
@@ -2341 +2345 @@

 	free_hyp_pgds();
 	for_each_possible_cpu(cpu) {
+		if (per_cpu(kvm_hyp_initialized, cpu))
+			continue;
+
 		free_pages(per_cpu(kvm_arm_hyp_stack_base, cpu), NVHE_STACK_SHIFT - PAGE_SHIFT);
-		free_pages(kvm_nvhe_sym(kvm_arm_hyp_percpu_base)[cpu], nvhe_percpu_order());
+
+		if (!kvm_nvhe_sym(kvm_arm_hyp_percpu_base)[cpu])
+			continue;

 		if (free_sve) {
 			struct cpu_sve_state *sve_state;
@@ -2355 +2354 @@
 			sve_state = per_cpu_ptr_nvhe_sym(kvm_host_data, cpu)->sve_state;
 			free_pages((unsigned long) sve_state, pkvm_host_sve_state_order());
 		}
+
+		free_pages(kvm_nvhe_sym(kvm_arm_hyp_percpu_base)[cpu], nvhe_percpu_order());
+
 	}
 }
-26
arch/arm64/kvm/fpsimd.c
@@ -15 +15 @@
 #include <asm/sysreg.h>

 /*
- * Called on entry to KVM_RUN unless this vcpu previously ran at least
- * once and the most recent prior KVM_RUN for this vcpu was called from
- * the same task as current (highly likely).
- *
- * This is guaranteed to execute before kvm_arch_vcpu_load_fp(vcpu),
- * such that on entering hyp the relevant parts of current are already
- * mapped.
- */
-int kvm_arch_vcpu_run_map_fp(struct kvm_vcpu *vcpu)
-{
-	struct user_fpsimd_state *fpsimd = &current->thread.uw.fpsimd_state;
-	int ret;
-
-	/* pKVM has its own tracking of the host fpsimd state. */
-	if (is_protected_kvm_enabled())
-		return 0;
-
-	/* Make sure the host task fpsimd state is visible to hyp: */
-	ret = kvm_share_hyp(fpsimd, fpsimd + 1);
-	if (ret)
-		return ret;
-
-	return 0;
-}
-
-/*
  * Prepare vcpu for saving the host's FPSIMD state and loading the guest's.
  * The actual loading is done by the FPSIMD access trap taken to hyp.
  *
+12 -8
arch/arm64/kvm/hyp/nvhe/mem_protect.c
@@ -479 +479 @@
 {
 	struct kvm_mem_range cur;
 	kvm_pte_t pte;
+	u64 granule;
 	s8 level;
 	int ret;
@@ -497 +496 @@
 		return -EPERM;
 	}

-	do {
-		u64 granule = kvm_granule_size(level);
+	for (; level <= KVM_PGTABLE_LAST_LEVEL; level++) {
+		if (!kvm_level_supports_block_mapping(level))
+			continue;
+		granule = kvm_granule_size(level);
 		cur.start = ALIGN_DOWN(addr, granule);
 		cur.end = cur.start + granule;
-		level++;
-	} while ((level <= KVM_PGTABLE_LAST_LEVEL) &&
-		 !(kvm_level_supports_block_mapping(level) &&
-		   range_included(&cur, range)));
+		if (!range_included(&cur, range))
+			continue;
+		*range = cur;
+		return 0;
+	}

-	*range = cur;
+	WARN_ON(1);

-	return 0;
+	return -EINVAL;
 }

 int host_stage2_idmap_locked(phys_addr_t addr, u64 size,
+23 -3
arch/arm64/kvm/nested.c
@@ -1402 +1402 @@
 	}
 }

+#define has_tgran_2(__r, __sz)						\
+	({								\
+		u64 _s1, _s2, _mmfr0 = __r;				\
+									\
+		_s2 = SYS_FIELD_GET(ID_AA64MMFR0_EL1,			\
+				    TGRAN##__sz##_2, _mmfr0);		\
+									\
+		_s1 = SYS_FIELD_GET(ID_AA64MMFR0_EL1,			\
+				    TGRAN##__sz, _mmfr0);		\
+									\
+		((_s2 != ID_AA64MMFR0_EL1_TGRAN##__sz##_2_NI &&		\
+		  _s2 != ID_AA64MMFR0_EL1_TGRAN##__sz##_2_TGRAN##__sz) || \
+		 (_s2 == ID_AA64MMFR0_EL1_TGRAN##__sz##_2_TGRAN##__sz &&  \
+		  _s1 != ID_AA64MMFR0_EL1_TGRAN##__sz##_NI));		\
+	})
 /*
  * Our emulated CPU doesn't support all the possible features. For the
  * sake of simplicity (and probably mental sanity), wipe out a number
@@ -1426 +1411 @@
  */
 u64 limit_nv_id_reg(struct kvm *kvm, u32 reg, u64 val)
 {
+	u64 orig_val = val;
+
 	switch (reg) {
 	case SYS_ID_AA64ISAR0_EL1:
 		/* Support everything but TME */
@@ -1497 +1480 @@
 	 */
 	switch (PAGE_SIZE) {
 	case SZ_4K:
-		val |= SYS_FIELD_PREP_ENUM(ID_AA64MMFR0_EL1, TGRAN4_2, IMP);
+		if (has_tgran_2(orig_val, 4))
+			val |= SYS_FIELD_PREP_ENUM(ID_AA64MMFR0_EL1, TGRAN4_2, IMP);
 		fallthrough;
 	case SZ_16K:
-		val |= SYS_FIELD_PREP_ENUM(ID_AA64MMFR0_EL1, TGRAN16_2, IMP);
+		if (has_tgran_2(orig_val, 16))
+			val |= SYS_FIELD_PREP_ENUM(ID_AA64MMFR0_EL1, TGRAN16_2, IMP);
 		fallthrough;
 	case SZ_64K:
-		val |= SYS_FIELD_PREP_ENUM(ID_AA64MMFR0_EL1, TGRAN64_2, IMP);
+		if (has_tgran_2(orig_val, 64))
+			val |= SYS_FIELD_PREP_ENUM(ID_AA64MMFR0_EL1, TGRAN64_2, IMP);
 		break;
 	}
+1 -3
arch/arm64/kvm/vgic/vgic-v3-nested.c
@@ -401 +401 @@
 {
 	bool level;

-	level = __vcpu_sys_reg(vcpu, ICH_HCR_EL2) & ICH_HCR_EL2_En;
-	if (level)
-		level &= vgic_v3_get_misr(vcpu);
+	level = (__vcpu_sys_reg(vcpu, ICH_HCR_EL2) & ICH_HCR_EL2_En) && vgic_v3_get_misr(vcpu);
 	kvm_vgic_inject_irq(vcpu->kvm, vcpu,
 			    vcpu->kvm->arch.vgic.mi_intid, level, vcpu);
 }
+6 -1
arch/x86/include/asm/kvm_host.h
@@ -700 +700 @@

 	struct kvm_vcpu_hv_tlb_flush_fifo tlb_flush_fifo[HV_NR_TLB_FLUSH_FIFOS];

-	/* Preallocated buffer for handling hypercalls passing sparse vCPU set */
+	/*
+	 * Preallocated buffers for handling hypercalls that pass sparse vCPU
+	 * sets (for high vCPU counts, they're too large to comfortably fit on
+	 * the stack).
+	 */
 	u64 sparse_banks[HV_MAX_SPARSE_VCPU_BANKS];
+	DECLARE_BITMAP(vcpu_mask, KVM_MAX_VCPUS);

 	struct hv_vp_assist_page vp_assist_page;
+1
arch/x86/include/asm/shared/tdx.h
@@ -72 +72 @@
 #define TDVMCALL_MAP_GPA		0x10001
 #define TDVMCALL_GET_QUOTE		0x10002
 #define TDVMCALL_REPORT_FATAL_ERROR	0x10003
+#define TDVMCALL_SETUP_EVENT_NOTIFY_INTERRUPT	0x10004ULL

 /*
  * TDG.VP.VMCALL Status Codes (returned in R10)
+7 -1
arch/x86/include/uapi/asm/kvm.h
@@ -965 +965 @@
 struct kvm_tdx_capabilities {
 	__u64 supported_attrs;
 	__u64 supported_xfam;
-	__u64 reserved[254];
+
+	__u64 kernel_tdvmcallinfo_1_r11;
+	__u64 user_tdvmcallinfo_1_r11;
+	__u64 kernel_tdvmcallinfo_1_r12;
+	__u64 user_tdvmcallinfo_1_r12;
+
+	__u64 reserved[250];

 	/* Configurable CPUID bits for userspace */
 	struct kvm_cpuid2 cpuid;
+4 -1
arch/x86/kvm/hyperv.c
@@ -1979 +1979 @@
 		if (entries[i] == KVM_HV_TLB_FLUSHALL_ENTRY)
 			goto out_flush_all;

+		if (is_noncanonical_invlpg_address(entries[i], vcpu))
+			continue;
+
 		/*
 		 * Lower 12 bits of 'address' encode the number of additional
 		 * pages to flush.
@@ -2004 +2001 @@
 static u64 kvm_hv_flush_tlb(struct kvm_vcpu *vcpu, struct kvm_hv_hcall *hc)
 {
 	struct kvm_vcpu_hv *hv_vcpu = to_hv_vcpu(vcpu);
+	unsigned long *vcpu_mask = hv_vcpu->vcpu_mask;
 	u64 *sparse_banks = hv_vcpu->sparse_banks;
 	struct kvm *kvm = vcpu->kvm;
 	struct hv_tlb_flush_ex flush_ex;
 	struct hv_tlb_flush flush;
-	DECLARE_BITMAP(vcpu_mask, KVM_MAX_VCPUS);
 	struct kvm_vcpu_hv_tlb_flush_fifo *tlb_flush_fifo;
 	/*
 	 * Normally, there can be no more than 'KVM_HV_TLB_FLUSH_FIFO_SIZE'
+10 -2
arch/x86/kvm/svm/sev.c
@@ -1971 +1971 @@
 	struct kvm_vcpu *src_vcpu;
 	unsigned long i;

+	if (src->created_vcpus != atomic_read(&src->online_vcpus) ||
+	    dst->created_vcpus != atomic_read(&dst->online_vcpus))
+		return -EBUSY;
+
 	if (!sev_es_guest(src))
 		return 0;
@@ -4449 +4445 @@
 	 * the VMSA will be NULL if this vCPU is the destination for intrahost
 	 * migration, and will be copied later.
 	 */
-	if (svm->sev_es.vmsa && !svm->sev_es.snp_has_guest_vmsa)
-		svm->vmcb->control.vmsa_pa = __pa(svm->sev_es.vmsa);
+	if (!svm->sev_es.snp_has_guest_vmsa) {
+		if (svm->sev_es.vmsa)
+			svm->vmcb->control.vmsa_pa = __pa(svm->sev_es.vmsa);
+		else
+			svm->vmcb->control.vmsa_pa = INVALID_PAGE;
+	}

 	if (cpu_feature_enabled(X86_FEATURE_ALLOWED_SEV_FEATURES))
 		svm->vmcb->control.allowed_sev_features = sev->vmsa_features |
+30
arch/x86/kvm/vmx/tdx.c
@@ -173 +173 @@
 	tdx_clear_unsupported_cpuid(entry);
 }

+#define TDVMCALLINFO_GET_QUOTE				BIT(0)
+#define TDVMCALLINFO_SETUP_EVENT_NOTIFY_INTERRUPT	BIT(1)
+
 static int init_kvm_tdx_caps(const struct tdx_sys_info_td_conf *td_conf,
 			     struct kvm_tdx_capabilities *caps)
 {
@@ -190 +187 @@
 		return -EIO;

 	caps->cpuid.nent = td_conf->num_cpuid_config;
+
+	caps->user_tdvmcallinfo_1_r11 =
+		TDVMCALLINFO_GET_QUOTE |
+		TDVMCALLINFO_SETUP_EVENT_NOTIFY_INTERRUPT;

 	for (i = 0; i < td_conf->num_cpuid_config; i++)
 		td_init_cpuid_entry2(&caps->cpuid.entries[i], i);
@@ -1537 +1530 @@
 	return 0;
 }

+static int tdx_setup_event_notify_interrupt(struct kvm_vcpu *vcpu)
+{
+	struct vcpu_tdx *tdx = to_tdx(vcpu);
+	u64 vector = tdx->vp_enter_args.r12;
+
+	if (vector < 32 || vector > 255) {
+		tdvmcall_set_return_code(vcpu, TDVMCALL_STATUS_INVALID_OPERAND);
+		return 1;
+	}
+
+	vcpu->run->exit_reason = KVM_EXIT_TDX;
+	vcpu->run->tdx.flags = 0;
+	vcpu->run->tdx.nr = TDVMCALL_SETUP_EVENT_NOTIFY_INTERRUPT;
+	vcpu->run->tdx.setup_event_notify.ret = TDVMCALL_STATUS_SUBFUNC_UNSUPPORTED;
+	vcpu->run->tdx.setup_event_notify.vector = vector;
+
+	vcpu->arch.complete_userspace_io = tdx_complete_simple;
+
+	return 0;
+}
+
 static int handle_tdvmcall(struct kvm_vcpu *vcpu)
 {
 	switch (tdvmcall_leaf(vcpu)) {
@@ -1569 +1541 @@
 		return tdx_get_td_vm_call_info(vcpu);
 	case TDVMCALL_GET_QUOTE:
 		return tdx_get_quote(vcpu);
+	case TDVMCALL_SETUP_EVENT_NOTIFY_INTERRUPT:
+		return tdx_setup_event_notify_interrupt(vcpu);
 	default:
 		break;
 	}
+3 -1
arch/x86/kvm/x86.c
@@ -3258 +3258 @@

 	/* With all the info we got, fill in the values */

-	if (kvm_caps.has_tsc_control)
+	if (kvm_caps.has_tsc_control) {
 		tgt_tsc_khz = kvm_scale_tsc(tgt_tsc_khz,
 					    v->arch.l1_tsc_scaling_ratio);
+		tgt_tsc_khz = tgt_tsc_khz ? : 1;
+	}

 	if (unlikely(vcpu->hw_tsc_khz != tgt_tsc_khz)) {
 		kvm_get_time_scale(NSEC_PER_SEC, tgt_tsc_khz * 1000LL,
+13 -2
arch/x86/kvm/xen.c
@@ -1971 +1971 @@
 {
 	struct kvm_vcpu *vcpu;

-	if (ue->u.xen_evtchn.port >= max_evtchn_port(kvm))
-		return -EINVAL;
+	/*
+	 * Don't check that the port is within range of max_evtchn_port().
+	 * Userspace can configure whatever targets it likes; events simply
+	 * won't be delivered if/while the target is invalid, just as
+	 * userspace can configure MSIs which target non-existent APICs.
+	 *
+	 * This allows the IRQ routing table to be restored on Live Migration
+	 * and Live Update *independently* of other things like creating
+	 * vCPUs, without imposing an ordering dependency on userspace.  In
+	 * this particular case, the problematic ordering would be with
+	 * setting the Xen 'long mode' flag, which changes max_evtchn_port()
+	 * to allow 4096 instead of 1024 event channels.
+	 */

 	/* We only support 2 level event channels for now */
 	if (ue->u.xen_evtchn.priority != KVM_IRQ_ROUTING_XEN_EVTCHN_PRIO_2LEVEL)
+2
include/linux/psp-sev.h
@@ -594 +594 @@
 * @imi_en: launch flow is launching an IMI (Incoming Migration Image) for the
 *          purpose of guest-assisted migration.
 * @rsvd: reserved
+* @desired_tsc_khz: hypervisor desired mean TSC freq in kHz of the guest
 * @gosvw: guest OS-visible workarounds, as defined by hypervisor
 */
 struct sev_data_snp_launch_start {
@@ -604 +603 @@
 	u32 ma_en:1;			/* In */
 	u32 imi_en:1;			/* In */
 	u32 rsvd:30;
+	u32 desired_tsc_khz;		/* In */
 	u8 gosvw[16];			/* In */
 } __packed;
+4
include/uapi/linux/kvm.h
@@ -467 +467 @@
 			__u64 leaf;
 			__u64 r11, r12, r13, r14;
 		} get_tdvmcall_info;
+		struct {
+			__u64 ret;
+			__u64 vector;
+		} setup_event_notify;
 	};
 } tdx;
 /* Fix the size of the union. */
+1
tools/testing/selftests/kvm/x86/monitor_mwait_test.c
@@ -74 +74 @@
 	int testcase;
 	char test[80];

+	TEST_REQUIRE(this_cpu_has(X86_FEATURE_MWAIT));
 	TEST_REQUIRE(kvm_has_cap(KVM_CAP_DISABLE_QUIRKS2));

 	ksft_print_header();
+3
virt/kvm/kvm_main.c
@@ -2572 +2572 @@
 		r = xa_reserve(&kvm->mem_attr_array, i, GFP_KERNEL_ACCOUNT);
 		if (r)
 			goto out_unlock;
+
+		cond_resched();
 	}

 	kvm_handle_gfn_range(kvm, &pre_set_range);
@@ -2582 +2580 @@
 		r = xa_err(xa_store(&kvm->mem_attr_array, i, entry,
 				    GFP_KERNEL_ACCOUNT));
 		KVM_BUG_ON(r, kvm);
+		cond_resched();
 	}

 	kvm_handle_gfn_range(kvm, &post_set_range);