
x86/sme: Use percpu boolean to control WBINVD during kexec

TL;DR:

Prepare to unify how TDX and SME do cache flushing during kexec by
making a percpu boolean control whether to do the WBINVD.

-- Background --

On SME platforms, dirty cacheline aliases with and without the encryption
bit can coexist, and the CPU can flush them back to memory in any order.
During kexec, the caches must be flushed before jumping to the new
kernel; otherwise, dirty cachelines could silently corrupt the memory
used by the new kernel due to their different encryption attributes.

TDX also needs a cache flush during kexec for the same reason. It would
be good to have a generic way to flush the cache instead of scattering
checks for each feature all around.

When SME is enabled, the kernel encrypts essentially all memory,
including the kernel itself, so even a simple memory write from the
kernel can dirty cachelines. Currently, the kernel uses WBINVD to flush
the cache for SME during kexec in two places:

1) the one in stop_this_cpu() for all remote CPUs when the kexec-ing CPU
stops them;
2) the one in the relocate_kernel() where the kexec-ing CPU jumps to the
new kernel.

-- Solution --

Unlike SME, TDX can only dirty cachelines when it is used (i.e., when
SEAMCALLs are performed). Since there are no more SEAMCALLs after the
aforementioned WBINVDs, leverage this for TDX.

To unify the approach for SME and TDX, use a percpu boolean to indicate
the cache may be in an incoherent state and needs flushing during kexec,
and set the boolean for SME. TDX can then leverage it.

While SME could use a global flag (since it is enabled at early boot on
all CPUs), a percpu flag fits TDX better:

The percpu flag can be set when a CPU makes a SEAMCALL, and cleared when
another WBINVD on the CPU obviates the need for a kexec-time WBINVD.
Avoiding the kexec-time WBINVD is valuable, because there is an existing
race[*] in which kexec could proceed while another CPU is still active.
A WBINVD there could make that race worse, so it is worth skipping the
flush when possible.

-- Side effect to SME --

Today, the first WBINVD in stop_this_cpu() is performed when SME is
*supported* by the platform, while the second WBINVD in
relocate_kernel() is done only when SME is *activated* by the kernel.
Simplify this by also performing the second WBINVD whenever the platform
supports SME. The kernel can then simply set the percpu boolean when
bringing up a CPU, based on whether the platform supports SME.

No other functional change intended.

[*] The aforementioned race:

During kexec, native_stop_other_cpus() is called to stop all remote CPUs
before jumping to the new kernel. It first sends normal REBOOT-vector
IPIs to the remote CPUs and waits for them to stop. If that times out,
it sends NMIs to the CPUs that are still alive. The race occurs when
native_stop_other_cpus() has to send NMIs and can potentially result in
a system hang (for more information, see [1]).

Signed-off-by: Kai Huang <kai.huang@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
Reviewed-by: Tom Lendacky <thomas.lendacky@amd.com>
Reviewed-by: Borislav Petkov (AMD) <bp@alien8.de>
Tested-by: Tom Lendacky <thomas.lendacky@amd.com>
Link: https://lore.kernel.org/kvm/b963fcd60abe26c7ec5dc20b42f1a2ebbcc72397.1750934177.git.kai.huang@intel.com/ [1]
Link: https://lore.kernel.org/all/20250901160930.1785244-3-pbonzini%40redhat.com

Authored by Kai Huang, committed by Dave Hansen (83214a77 744b02f6; +52 -22)
arch/x86/include/asm/kexec.h (+2 -2):

 #include <linux/bits.h>

-#define RELOC_KERNEL_PRESERVE_CONTEXT		BIT(0)
-#define RELOC_KERNEL_HOST_MEM_ENC_ACTIVE	BIT(1)
+#define RELOC_KERNEL_PRESERVE_CONTEXT		BIT(0)
+#define RELOC_KERNEL_CACHE_INCOHERENT		BIT(1)

 #endif
arch/x86/include/asm/processor.h (+2):

 void microcode_check(struct cpuinfo_x86 *prev_info);
 void store_cpu_caps(struct cpuinfo_x86 *info);

+DECLARE_PER_CPU(bool, cache_state_incoherent);
+
 enum l1tf_mitigations {
 	L1TF_MITIGATION_OFF,
 	L1TF_MITIGATION_AUTO,
arch/x86/kernel/cpu/amd.c (+17):

 	u64 msr;

 	/*
+	 * Mark using WBINVD is needed during kexec on processors that
+	 * support SME. This provides support for performing a successful
+	 * kexec when going from SME inactive to SME active (or vice-versa).
+	 *
+	 * The cache must be cleared so that if there are entries with the
+	 * same physical address, both with and without the encryption bit,
+	 * they don't race each other when flushed and potentially end up
+	 * with the wrong entry being committed to memory.
+	 *
+	 * Test the CPUID bit directly because with mem_encrypt=off the
+	 * BSP will clear the X86_FEATURE_SME bit and the APs will not
+	 * see it set after that.
+	 */
+	if (c->extended_cpuid_level >= 0x8000001f && (cpuid_eax(0x8000001f) & BIT(0)))
+		__this_cpu_write(cache_state_incoherent, true);
+
+	/*
 	 * BIOS support is required for SME and SEV.
 	 * For SME: If BIOS has enabled SME then adjust x86_phys_bits by
 	 * the SME physical address space reduction value.
arch/x86/kernel/machine_kexec_64.c (+10 -4):

 #include <asm/set_memory.h>
 #include <asm/cpu.h>
 #include <asm/efi.h>
+#include <asm/processor.h>

 #ifdef CONFIG_ACPI
 /*
···
 		relocate_kernel_flags |= RELOC_KERNEL_PRESERVE_CONTEXT;

 	/*
-	 * This must be done before load_segments() since if call depth tracking
-	 * is used then GS must be valid to make any function calls.
+	 * This must be done before load_segments() since it resets
+	 * GS to 0 and percpu data needs the correct GS to work.
 	 */
-	if (cc_platform_has(CC_ATTR_HOST_MEM_ENCRYPT))
-		relocate_kernel_flags |= RELOC_KERNEL_HOST_MEM_ENC_ACTIVE;
+	if (this_cpu_read(cache_state_incoherent))
+		relocate_kernel_flags |= RELOC_KERNEL_CACHE_INCOHERENT;

 	/*
 	 * The segment registers are funny things, they have both a
···
 	 *
 	 * Take advantage of this here by force loading the segments,
 	 * before the GDT is zapped with an invalid value.
+	 *
+	 * load_segments() resets GS to 0. Don't make any function call
+	 * after here since call depth tracking uses percpu variables to
+	 * operate (relocate_kernel() is explicitly ignored by call depth
+	 * tracking).
 	 */
 	load_segments();
arch/x86/kernel/process.c (+11 -13):

 EXPORT_PER_CPU_SYMBOL_GPL(__tss_limit_invalid);

 /*
+ * The cache may be in an incoherent state and needs flushing during kexec.
+ * E.g., on SME/TDX platforms, dirty cacheline aliases with and without
+ * encryption bit(s) can coexist and the cache needs to be flushed before
+ * booting to the new kernel to avoid the silent memory corruption due to
+ * dirty cachelines with different encryption property being written back
+ * to the memory.
+ */
+DEFINE_PER_CPU(bool, cache_state_incoherent);
+
+/*
  * this gets called so that we can store lazy state into memory and copy the
  * current task into the new thread.
  */
···
 	disable_local_APIC();
 	mcheck_cpu_clear(c);

-	/*
-	 * Use wbinvd on processors that support SME. This provides support
-	 * for performing a successful kexec when going from SME inactive
-	 * to SME active (or vice-versa). The cache must be cleared so that
-	 * if there are entries with the same physical address, both with and
-	 * without the encryption bit, they don't race each other when flushed
-	 * and potentially end up with the wrong entry being committed to
-	 * memory.
-	 *
-	 * Test the CPUID bit directly because the machine might've cleared
-	 * X86_FEATURE_SME due to cmdline options.
-	 */
-	if (c->extended_cpuid_level >= 0x8000001f && (cpuid_eax(0x8000001f) & BIT(0)))
+	if (this_cpu_read(cache_state_incoherent))
 		wbinvd();
arch/x86/kernel/relocate_kernel_64.S (+10 -3):

 	movq	%r9, %cr3

 	/*
+	 * If the memory cache is in incoherent state, e.g., due to
+	 * memory encryption, do WBINVD to flush cache.
+	 *
 	 * If SME is active, there could be old encrypted cache line
 	 * entries that will conflict with the now unencrypted memory
 	 * used by kexec. Flush the caches before copying the kernel.
+	 *
+	 * Note SME sets this flag to true when the platform supports
+	 * SME, so the WBINVD is performed even SME is not activated
+	 * by the kernel. But this has no harm.
 	 */
-	testb	$RELOC_KERNEL_HOST_MEM_ENC_ACTIVE, %r11b
-	jz	.Lsme_off
+	testb	$RELOC_KERNEL_CACHE_INCOHERENT, %r11b
+	jz	.Lnowbinvd
 	wbinvd
-.Lsme_off:
+.Lnowbinvd:

 	call	swap_pages