KVM: SVM: force new asid on vcpu migration

If a migrated vcpu matches the asid_generation value of the target pcpu,
there will be no TLB flush via TLB_CONTROL_FLUSH_ALL_ASID, and the vcpu
will VMRUN with stale TLB entries.

The check for vcpu.cpu in pre_svm_run is meaningless, since svm_vcpu_load
has already updated it at schedule-in time.
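
The interaction is easier to see in a small userspace model. This is only a
sketch: struct vcpu_svm, struct svm_cpu_data, new_asid() and pre_svm_run()
below are simplified stand-ins, not the kernel definitions.

/* Sketch of the ASID-generation logic; simplified stand-ins only. */
#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>

struct svm_cpu_data {
	uint32_t asid_generation;
	uint32_t next_asid;
};

struct vcpu_svm {
	uint32_t asid_generation;
	uint32_t asid;
	bool flush;			/* models TLB_CONTROL_FLUSH_ALL_ASID */
};

static void new_asid(struct vcpu_svm *svm, struct svm_cpu_data *sd)
{
	svm->flush = true;		/* fresh ASID -> flush on VMRUN */
	svm->asid_generation = sd->asid_generation;
	svm->asid = sd->next_asid++;
}

static void pre_svm_run(struct vcpu_svm *svm, struct svm_cpu_data *sd)
{
	svm->flush = false;		/* models TLB_CONTROL_DO_NOTHING */
	if (svm->asid_generation != sd->asid_generation)
		new_asid(svm, sd);
}

int main(void)
{
	/* Both pcpus happen to be on the same generation number. */
	struct svm_cpu_data cpu0 = { .asid_generation = 1, .next_asid = 1 };
	struct svm_cpu_data cpu1 = { .asid_generation = 1, .next_asid = 1 };
	struct vcpu_svm svm = { 0 };

	pre_svm_run(&svm, &cpu0);
	printf("run on cpu0: asid=%u flush=%d\n", svm.asid, svm.flush);

	/*
	 * Migration to cpu1: without the fix the generations match, so
	 * pre_svm_run would skip new_asid() and the vcpu would run with an
	 * ASID whose TLB entries on cpu1 belong to some other guest.
	 * Zeroing the generation (what svm_vcpu_load now does) forces a
	 * new ASID and a flush.
	 */
	svm.asid_generation = 0;
	pre_svm_run(&svm, &cpu1);
	printf("run on cpu1: asid=%u flush=%d\n", svm.asid, svm.flush);
	return 0;
}

With the generation zeroed at migration, the generation-only comparison in
pre_svm_run is sufficient on its own, which is why the vcpu.cpu test can be
dropped; the remaining gap is generation wraparound, noted by the FIXME in
the patch.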

Based on original patch from Joerg Roedel (http://patchwork.kernel.org/patch/10021/)

Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Acked-by: Joerg Roedel <joerg.roedel@amd.com>
Signed-off-by: Avi Kivity <avi@redhat.com>


+3 -3
arch/x86/kvm/svm.c
@@ -711,6 +711,7 @@
 		svm->vmcb->control.tsc_offset += delta;
 		vcpu->cpu = cpu;
 		kvm_migrate_timers(vcpu);
+		svm->asid_generation = 0;
 	}
 
 	for (i = 0; i < NR_HOST_SAVE_USER_MSRS; i++)
@@ -1032,7 +1033,6 @@
 		svm->vmcb->control.tlb_ctl = TLB_CONTROL_FLUSH_ALL_ASID;
 	}
 
-	svm->vcpu.cpu = svm_data->cpu;
 	svm->asid_generation = svm_data->asid_generation;
 	svm->vmcb->control.asid = svm_data->next_asid++;
 }
@@ -2300,8 +2300,8 @@
 	struct svm_cpu_data *svm_data = per_cpu(svm_data, cpu);
 
 	svm->vmcb->control.tlb_ctl = TLB_CONTROL_DO_NOTHING;
-	if (svm->vcpu.cpu != cpu ||
-	    svm->asid_generation != svm_data->asid_generation)
+	/* FIXME: handle wraparound of asid_generation */
+	if (svm->asid_generation != svm_data->asid_generation)
 		new_asid(svm, svm_data);
 }
 