Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

x86/lib: Add WBNOINVD helper functions

In line with WBINVD usage, add WBNOINVD helper functions. Explicitly fall
back to WBINVD (via alternative()) if WBNOINVD isn't supported even though
the instruction itself is backwards compatible (WBNOINVD is WBINVD with an
ignored REP prefix), so that disabling X86_FEATURE_WBNOINVD behaves as one
would expect, e.g. in case there's a hardware issue that affects WBNOINVD.

Opportunistically, add comments explaining the architectural behavior of
WBINVD and WBNOINVD, and provide hints and pointers to uarch-specific
behavior.

Note, alternative() ensures compatibility with early boot code as needed.
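The fallback policy described above can be modeled in a few lines of user-space C. This is a sketch of the selection logic only, not the real mechanism: alternative() patches the instruction bytes in place during boot rather than branching at runtime, and both the helper name pick_wb_insn() and the bool standing in for X86_FEATURE_WBNOINVD are hypothetical.

```c
#include <stdbool.h>
#include <string.h>

/*
 * Illustrative sketch only: the kernel's alternative() rewrites the
 * instruction in place at patch time; there is no runtime branch. The
 * bool parameter stands in for the X86_FEATURE_WBNOINVD feature bit.
 */
static const char *pick_wb_insn(bool cpu_has_wbnoinvd)
{
	/*
	 * Fall back to WBINVD when WBNOINVD is unsupported or has been
	 * disabled (e.g. due to a hardware issue); for all supported
	 * in-kernel usage, WBINVD is a functional superset of WBNOINVD.
	 */
	return cpu_has_wbnoinvd ? "wbnoinvd" : "wbinvd";
}
```

Disabling the feature bit therefore reverts every call site to plain WBINVD, which is the behavior the commit message asks for.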

[ bp: Massage, fix typos, make export _GPL. ]

Signed-off-by: Kevin Loughlin <kevinloughlin@google.com>
Co-developed-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
Reviewed-by: Tom Lendacky <thomas.lendacky@amd.com>
Reviewed-by: Kai Huang <kai.huang@intel.com>
Acked-by: Ingo Molnar <mingo@kernel.org>
Link: https://lore.kernel.org/20250522233733.3176144-4-seanjc@google.com

Authored by Kevin Loughlin, committed by Borislav Petkov (AMD)
07f99c3f e6380817

+45 -1
arch/x86/include/asm/smp.h (+6)

@@ -113,6 +113,7 @@
 void play_dead_common(void);
 void wbinvd_on_cpu(int cpu);
 void wbinvd_on_all_cpus(void);
+void wbnoinvd_on_all_cpus(void);
 
 void smp_kick_mwait_play_dead(void);
 void __noreturn mwait_play_dead(unsigned int eax_hint);
@@ -152,6 +151,11 @@
 static inline void wbinvd_on_all_cpus(void)
 {
 	wbinvd();
+}
+
+static inline void wbnoinvd_on_all_cpus(void)
+{
+	wbnoinvd();
 }
 
 static inline struct cpumask *cpu_llc_shared_mask(int cpu)
arch/x86/include/asm/special_insns.h (+28 -1)

@@ -104,9 +104,36 @@
 }
 #endif
 
+/*
+ * Write back all modified lines in all levels of cache associated with this
+ * logical processor to main memory, and then invalidate all caches. Depending
+ * on the micro-architecture, WBINVD (and WBNOINVD below) may or may not affect
+ * lower level caches associated with another logical processor that shares any
+ * level of this processor's cache hierarchy.
+ */
 static __always_inline void wbinvd(void)
 {
-	asm volatile("wbinvd": : :"memory");
+	asm volatile("wbinvd" : : : "memory");
+}
+
+/* Instruction encoding provided for binutils backwards compatibility. */
+#define ASM_WBNOINVD _ASM_BYTES(0xf3,0x0f,0x09)
+
+/*
+ * Write back all modified lines in all levels of cache associated with this
+ * logical processor to main memory, but do NOT explicitly invalidate caches,
+ * i.e. leave all/most cache lines in the hierarchy in non-modified state.
+ */
+static __always_inline void wbnoinvd(void)
+{
+	/*
+	 * Explicitly encode WBINVD if X86_FEATURE_WBNOINVD is unavailable even
+	 * though WBNOINVD is backwards compatible (it's simply WBINVD with an
+	 * ignored REP prefix), to guarantee that WBNOINVD isn't used if it
+	 * needs to be avoided for any reason. For all supported usage in the
+	 * kernel, WBINVD is functionally a superset of WBNOINVD.
+	 */
+	alternative("wbinvd", ASM_WBNOINVD, X86_FEATURE_WBNOINVD);
 }
 
 static inline unsigned long __read_cr4(void)
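The ASM_WBNOINVD define hard-codes the encoding so that older binutils without WBNOINVD support can still assemble it. The "WBINVD with an ignored REP prefix" claim can be sanity-checked in user space by comparing the byte sequences from the hunk above (the helper name is illustrative):

```c
#include <string.h>

/* Encodings as hard-coded by ASM_WBNOINVD in the hunk above. */
static const unsigned char wbinvd_enc[]   = { 0x0f, 0x09 };
static const unsigned char wbnoinvd_enc[] = { 0xf3, 0x0f, 0x09 };

/* WBNOINVD is WBINVD (0F 09) preceded by an F3 (REP) prefix. */
static int wbnoinvd_is_rep_wbinvd(void)
{
	return wbnoinvd_enc[0] == 0xf3 &&
	       !memcmp(wbnoinvd_enc + 1, wbinvd_enc, sizeof(wbinvd_enc));
}
```

This is why the instruction is backwards compatible: a CPU without WBNOINVD support simply ignores the REP prefix and executes WBINVD.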
arch/x86/lib/cache-smp.c (+11)

@@ -19,3 +19,14 @@
 	on_each_cpu(__wbinvd, NULL, 1);
 }
 EXPORT_SYMBOL(wbinvd_on_all_cpus);
+
+static void __wbnoinvd(void *dummy)
+{
+	wbnoinvd();
+}
+
+void wbnoinvd_on_all_cpus(void)
+{
+	on_each_cpu(__wbnoinvd, NULL, 1);
+}
+EXPORT_SYMBOL_GPL(wbnoinvd_on_all_cpus);
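The wrapper above follows the same pattern as wbinvd_on_all_cpus(): fan the callback out to every online CPU and wait for completion (the final argument 1 to on_each_cpu() means "wait"). A rough user-space analogy of that fan-out, using threads in place of CPUs — all names here are hypothetical, and the real on_each_cpu() runs the callback via IPIs on each online CPU rather than spawning threads:

```c
#include <pthread.h>
#include <stdatomic.h>

#define FAKE_NCPUS 4	/* stand-in for the number of online CPUs */

static atomic_int nr_calls;

/* Per-"CPU" callback; stands in for __wbnoinvd(). */
static void *fake_wbnoinvd(void *dummy)
{
	(void)dummy;
	atomic_fetch_add(&nr_calls, 1);
	return NULL;
}

/* Run the callback once per "CPU" and wait for all of them to finish,
 * mirroring on_each_cpu(__wbnoinvd, NULL, 1). */
static int fake_wbnoinvd_on_all_cpus(void)
{
	pthread_t tid[FAKE_NCPUS];

	for (int i = 0; i < FAKE_NCPUS; i++)
		pthread_create(&tid[i], NULL, fake_wbnoinvd, NULL);
	for (int i = 0; i < FAKE_NCPUS; i++)
		pthread_join(tid[i], NULL);

	return atomic_load(&nr_calls);
}
```

The _GPL export (versus the plain EXPORT_SYMBOL on wbinvd_on_all_cpus) is the change called out in the "[ bp: ... make export _GPL ]" note in the commit message.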