Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

x86/srso: Add a Speculative RAS Overflow mitigation

Add a mitigation for the speculative return address stack overflow
vulnerability found on AMD processors.

The mitigation works by ensuring all RET instructions speculate to
a controlled location, similar to how speculation is controlled in the
retpoline sequence. To accomplish this, the __x86_return_thunk forces
the CPU to mispredict every function return using a 'safe return'
sequence.

To ensure the safety of this mitigation, the kernel must ensure that the
safe return sequence is itself free from attacker interference. In Zen3
and Zen4, this is accomplished by creating a BTB alias between the
untraining function srso_untrain_ret_alias() and the safe return
function srso_safe_ret_alias() which results in evicting a potentially
poisoned BTB entry and using that safe one for all function returns.

On older Zen1 and Zen2, this is accomplished using a reinterpretation
technique similar to the Retbleed one: srso_untrain_ret() and
srso_safe_ret().

Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>

+422 -10
+1
Documentation/admin-guide/hw-vuln/index.rst
···
   l1d_flush.rst
   processor_mmio_stale_data.rst
   cross-thread-rsb.rst
+  srso
+133
Documentation/admin-guide/hw-vuln/srso.rst
···
+.. SPDX-License-Identifier: GPL-2.0
+
+Speculative Return Stack Overflow (SRSO)
+========================================
+
+This is a mitigation for the speculative return stack overflow (SRSO)
+vulnerability found on AMD processors. The mechanism is by now the well
+known scenario of poisoning CPU functional units - the Branch Target
+Buffer (BTB) and Return Address Predictor (RAP) in this case - and then
+tricking the elevated privilege domain (the kernel) into leaking
+sensitive data.
+
+AMD CPUs predict RET instructions using a Return Address Predictor (aka
+Return Address Stack/Return Stack Buffer). In some cases, a
+non-architectural CALL instruction (i.e., an instruction predicted to be
+a CALL but is not actually a CALL) can create an entry in the RAP which
+may be used to predict the target of a subsequent RET instruction.
+
+The specific circumstances that lead to this vary by microarchitecture
+but the concern is that an attacker can mis-train the CPU BTB to predict
+non-architectural CALL instructions in kernel space and use this to
+control the speculative target of a subsequent kernel RET, potentially
+leading to information disclosure via a speculative side-channel.
+
+The issue is tracked under CVE-2023-20569.
+
+Affected processors
+-------------------
+
+AMD Zen, generations 1-4. That is, all families 0x17 and 0x19. Older
+processors have not been investigated.
+
+System information and options
+------------------------------
+
+First of all, it is required that the latest microcode be loaded for
+mitigations to be effective.
+
+The sysfs file showing SRSO mitigation status is:
+
+  /sys/devices/system/cpu/vulnerabilities/spec_rstack_overflow
+
+The possible values in this file are:
+
+ - 'Not affected'               The processor is not vulnerable
+
+ - 'Vulnerable: no microcode'   The processor is vulnerable, no
+                                microcode extending IBPB functionality
+                                to address the vulnerability has been
+                                applied.
+
+ - 'Mitigation: microcode'      Extended IBPB functionality microcode
+                                patch has been applied. It does not
+                                address User->Kernel and Guest->Host
+                                transitions protection but it does
+                                address User->User and VM->VM attack
+                                vectors.
+
+                                (spec_rstack_overflow=microcode)
+
+ - 'Mitigation: safe RET'       Software-only mitigation. It complements
+                                the extended IBPB microcode patch
+                                functionality by addressing User->Kernel
+                                and Guest->Host transitions protection.
+
+                                Selected by default or by
+                                spec_rstack_overflow=safe-ret
+
+ - 'Mitigation: IBPB'           Similar protection as "safe RET" above
+                                but employs an IBPB barrier on privilege
+                                domain crossings (User->Kernel,
+                                Guest->Host).
+
+                                (spec_rstack_overflow=ibpb)
+
+ - 'Mitigation: IBPB on VMEXIT' Mitigation addressing the cloud provider
+                                scenario - the Guest->Host transitions
+                                only.
+
+                                (spec_rstack_overflow=ibpb-vmexit)
+
+In order to exploit the vulnerability, an attacker needs to:
+
+ - gain local access on the machine
+
+ - break kASLR
+
+ - find gadgets in the running kernel in order to use them in the exploit
+
+ - potentially create and pin an additional workload on the sibling
+   thread, depending on the microarchitecture (not necessary on fam 0x19)
+
+ - run the exploit
+
+Considering the performance implications of each mitigation type, the
+default one is 'Mitigation: safe RET' which should take care of most
+attack vectors, including the local User->Kernel one.
+
+As always, the user is advised to keep her/his system up-to-date by
+applying software updates regularly.
+
+The default setting will be reevaluated when needed and especially when
+new attack vectors appear.
+
+As one can surmise, 'Mitigation: safe RET' does come at the cost of some
+performance depending on the workload. If one trusts her/his userspace
+and does not want to suffer the performance impact, one can always
+disable the mitigation with spec_rstack_overflow=off.
+
+Similarly, 'Mitigation: IBPB' is another full mitigation type employing
+an indirect branch prediction barrier after having applied the required
+microcode patch for one's system. This mitigation also comes at
+a performance cost.
+
+Mitigation: safe RET
+--------------------
+
+The mitigation works by ensuring all RET instructions speculate to
+a controlled location, similar to how speculation is controlled in the
+retpoline sequence. To accomplish this, the __x86_return_thunk forces
+the CPU to mispredict every function return using a 'safe return'
+sequence.
+
+To ensure the safety of this mitigation, the kernel must ensure that the
+safe return sequence is itself free from attacker interference. In Zen3
+and Zen4, this is accomplished by creating a BTB alias between the
+untraining function srso_untrain_ret_alias() and the safe return
+function srso_safe_ret_alias() which results in evicting a potentially
+poisoned BTB entry and using that safe one for all function returns.
+
+On older Zen1 and Zen2, this is accomplished using a reinterpretation
+technique similar to the Retbleed one: srso_untrain_ret() and
+srso_safe_ret().
+11
Documentation/admin-guide/kernel-parameters.txt
···
 		Not specifying this option is equivalent to
 		spectre_v2_user=auto.

+	spec_rstack_overflow=
+		[X86] Control RAS overflow mitigation on AMD Zen CPUs
+
+		off		- Disable mitigation
+		microcode	- Enable microcode mitigation only
+		safe-ret	- Enable sw-only safe RET mitigation (default)
+		ibpb		- Enable mitigation by issuing IBPB on
+				  kernel entry
+		ibpb-vmexit	- Issue IBPB only on VMEXIT
+				  (cloud-specific mitigation)
+
 	spec_store_bypass_disable=
 		[HW] Control Speculative Store Bypass (SSB) Disable mitigation
 		(Speculative Store Bypass vulnerability)
+7
arch/x86/Kconfig
···
 	  This mitigates both spectre_v2 and retbleed at great cost to
 	  performance.

+config CPU_SRSO
+	bool "Mitigate speculative RAS overflow on AMD"
+	depends on CPU_SUP_AMD && X86_64 && RETHUNK
+	default y
+	help
+	  Enable the SRSO mitigation needed on AMD Zen1-4 machines.
+
 config SLS
 	bool "Mitigate Straight-Line-Speculation"
 	depends on CC_HAS_SLS && X86_64
+5
arch/x86/include/asm/cpufeatures.h
···
 #define X86_FEATURE_SMBA		(11*32+21) /* "" Slow Memory Bandwidth Allocation */
 #define X86_FEATURE_BMEC		(11*32+22) /* "" Bandwidth Monitoring Event Configuration */

+#define X86_FEATURE_SRSO		(11*32+24) /* "" AMD BTB untrain RETs */
+#define X86_FEATURE_SRSO_ALIAS		(11*32+25) /* "" AMD BTB untrain RETs through aliasing */
+
 /* Intel-defined CPU features, CPUID level 0x00000007:1 (EAX), word 12 */
 #define X86_FEATURE_AVX_VNNI		(12*32+ 4) /* AVX VNNI instructions */
 #define X86_FEATURE_AVX512_BF16		(12*32+ 5) /* AVX512 BFLOAT16 instructions */
···
 #define X86_BUG_EIBRS_PBRSB		X86_BUG(28) /* EIBRS is vulnerable to Post Barrier RSB Predictions */
 #define X86_BUG_SMT_RSB			X86_BUG(29) /* CPU is vulnerable to Cross-Thread Return Address Predictions */

+/* BUG word 2 */
+#define X86_BUG_SRSO			X86_BUG(1*32 + 0) /* AMD SRSO bug */
 #endif /* _ASM_X86_CPUFEATURES_H */
+14 -1
arch/x86/include/asm/nospec-branch.h
···
  * eventually turn into it's own annotation.
  */
 .macro VALIDATE_UNRET_END
-#if defined(CONFIG_NOINSTR_VALIDATION) && defined(CONFIG_CPU_UNRET_ENTRY)
+#if defined(CONFIG_NOINSTR_VALIDATION) && \
+	(defined(CONFIG_CPU_UNRET_ENTRY) || defined(CONFIG_CPU_SRSO))
 	ANNOTATE_RETPOLINE_SAFE
 	nop
 #endif
···
 	"call entry_ibpb", X86_FEATURE_ENTRY_IBPB, \
 	__stringify(RESET_CALL_DEPTH), X86_FEATURE_CALL_DEPTH
 #endif
+
+#ifdef CONFIG_CPU_SRSO
+	ALTERNATIVE_2 "", "call srso_untrain_ret", X86_FEATURE_SRSO, \
+		      "call srso_untrain_ret_alias", X86_FEATURE_SRSO_ALIAS
+#endif
 .endm

 .macro UNTRAIN_RET_FROM_CALL
···
 	CALL_ZEN_UNTRAIN_RET, X86_FEATURE_UNRET, \
 	"call entry_ibpb", X86_FEATURE_ENTRY_IBPB, \
 	__stringify(RESET_CALL_DEPTH_FROM_CALL), X86_FEATURE_CALL_DEPTH
 #endif
+
+#ifdef CONFIG_CPU_SRSO
+	ALTERNATIVE_2 "", "call srso_untrain_ret", X86_FEATURE_SRSO, \
+		      "call srso_untrain_ret_alias", X86_FEATURE_SRSO_ALIAS
+#endif
 .endm
···
 extern void __x86_return_thunk(void);
 extern void zen_untrain_ret(void);
+extern void srso_untrain_ret(void);
+extern void srso_untrain_ret_alias(void);
 extern void entry_ibpb(void);

 #ifdef CONFIG_CALL_THUNKS
+2
arch/x86/include/asm/processor.h
···
 #ifdef CONFIG_CPU_SUP_AMD
 extern u32 amd_get_nodes_per_socket(void);
 extern u32 amd_get_highest_perf(void);
+extern bool cpu_has_ibpb_brtype_microcode(void);
 #else
 static inline u32 amd_get_nodes_per_socket(void)	{ return 0; }
 static inline u32 amd_get_highest_perf(void)		{ return 0; }
+static inline bool cpu_has_ibpb_brtype_microcode(void)	{ return false; }
 #endif

 extern unsigned long arch_align_stack(unsigned long sp);
+3 -1
arch/x86/kernel/alternative.c
···
 	int i = 0;

 	/* Patch the custom return thunks... */
-	if (cpu_feature_enabled(X86_FEATURE_RETHUNK)) {
+	if (cpu_feature_enabled(X86_FEATURE_RETHUNK) ||
+	    cpu_feature_enabled(X86_FEATURE_SRSO) ||
+	    cpu_feature_enabled(X86_FEATURE_SRSO_ALIAS)) {
 		i = JMP32_INSN_SIZE;
 		__text_gen_insn(bytes, JMP32_INSN_OPCODE, addr, x86_return_thunk, i);
 	} else {
+14
arch/x86/kernel/cpu/amd.c
···
 	return 255;
 }
 EXPORT_SYMBOL_GPL(amd_get_highest_perf);
+
+bool cpu_has_ibpb_brtype_microcode(void)
+{
+	u8 fam = boot_cpu_data.x86;
+
+	if (fam == 0x17) {
+		/* Zen1/2 IBPB flushes branch type predictions too. */
+		return boot_cpu_has(X86_FEATURE_AMD_IBPB);
+	} else if (fam == 0x19) {
+		return false;
+	}
+
+	return false;
+}
+106
arch/x86/kernel/cpu/bugs.c
···
 static void __init mmio_select_mitigation(void);
 static void __init srbds_select_mitigation(void);
 static void __init l1d_flush_select_mitigation(void);
+static void __init srso_select_mitigation(void);

 /* The base value of the SPEC_CTRL MSR without task-specific bits set */
 u64 x86_spec_ctrl_base;
···
 	md_clear_select_mitigation();
 	srbds_select_mitigation();
 	l1d_flush_select_mitigation();
+	srso_select_mitigation();
 }

 /*
···
 early_param("l1tf", l1tf_cmdline);

 #undef pr_fmt
+#define pr_fmt(fmt)	"Speculative Return Stack Overflow: " fmt
+
+enum srso_mitigation {
+	SRSO_MITIGATION_NONE,
+	SRSO_MITIGATION_MICROCODE,
+	SRSO_MITIGATION_SAFE_RET,
+};
+
+enum srso_mitigation_cmd {
+	SRSO_CMD_OFF,
+	SRSO_CMD_MICROCODE,
+	SRSO_CMD_SAFE_RET,
+};
+
+static const char * const srso_strings[] = {
+	[SRSO_MITIGATION_NONE]		= "Vulnerable",
+	[SRSO_MITIGATION_MICROCODE]	= "Mitigation: microcode",
+	[SRSO_MITIGATION_SAFE_RET]	= "Mitigation: safe RET",
+};
+
+static enum srso_mitigation srso_mitigation __ro_after_init = SRSO_MITIGATION_NONE;
+static enum srso_mitigation_cmd srso_cmd __ro_after_init = SRSO_CMD_SAFE_RET;
+
+static int __init srso_parse_cmdline(char *str)
+{
+	if (!str)
+		return -EINVAL;
+
+	if (!strcmp(str, "off"))
+		srso_cmd = SRSO_CMD_OFF;
+	else if (!strcmp(str, "microcode"))
+		srso_cmd = SRSO_CMD_MICROCODE;
+	else if (!strcmp(str, "safe-ret"))
+		srso_cmd = SRSO_CMD_SAFE_RET;
+	else
+		pr_err("Ignoring unknown SRSO option (%s).", str);
+
+	return 0;
+}
+early_param("spec_rstack_overflow", srso_parse_cmdline);
+
+#define SRSO_NOTICE "WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options."
+
+static void __init srso_select_mitigation(void)
+{
+	bool has_microcode;
+
+	if (!boot_cpu_has_bug(X86_BUG_SRSO) || cpu_mitigations_off())
+		return;
+
+	has_microcode = cpu_has_ibpb_brtype_microcode();
+	if (!has_microcode) {
+		pr_warn("IBPB-extending microcode not applied!\n");
+		pr_warn(SRSO_NOTICE);
+	}
+
+	switch (srso_cmd) {
+	case SRSO_CMD_OFF:
+		return;
+
+	case SRSO_CMD_MICROCODE:
+		if (has_microcode) {
+			srso_mitigation = SRSO_MITIGATION_MICROCODE;
+			pr_warn(SRSO_NOTICE);
+		}
+		break;
+
+	case SRSO_CMD_SAFE_RET:
+		if (IS_ENABLED(CONFIG_CPU_SRSO)) {
+			if (boot_cpu_data.x86 == 0x19)
+				setup_force_cpu_cap(X86_FEATURE_SRSO_ALIAS);
+			else
+				setup_force_cpu_cap(X86_FEATURE_SRSO);
+			srso_mitigation = SRSO_MITIGATION_SAFE_RET;
+		} else {
+			pr_err("WARNING: kernel not compiled with CPU_SRSO.\n");
+			return;
+		}
+		break;
+
+	default:
+		break;
+	}
+
+	pr_info("%s%s\n", srso_strings[srso_mitigation], (has_microcode ? "" : ", no microcode"));
+}
+
+#undef pr_fmt
 #define pr_fmt(fmt) fmt

 #ifdef CONFIG_SYSFS
···
 	return sysfs_emit(buf, "%s\n", retbleed_strings[retbleed_mitigation]);
 }

+static ssize_t srso_show_state(char *buf)
+{
+	return sysfs_emit(buf, "%s%s\n",
+			  srso_strings[srso_mitigation],
+			  (cpu_has_ibpb_brtype_microcode() ? "" : ", no microcode"));
+}
+
 static ssize_t cpu_show_common(struct device *dev, struct device_attribute *attr,
 			       char *buf, unsigned int bug)
 {
···
 	case X86_BUG_RETBLEED:
 		return retbleed_show_state(buf);

+	case X86_BUG_SRSO:
+		return srso_show_state(buf);
+
 	default:
 		break;
···
 ssize_t cpu_show_retbleed(struct device *dev, struct device_attribute *attr, char *buf)
 {
 	return cpu_show_common(dev, attr, buf, X86_BUG_RETBLEED);
+}
+
+ssize_t cpu_show_spec_rstack_overflow(struct device *dev, struct device_attribute *attr, char *buf)
+{
+	return cpu_show_common(dev, attr, buf, X86_BUG_SRSO);
 }
 #endif
+7 -1
arch/x86/kernel/cpu/common.c
···
 #define RETBLEED	BIT(3)
 /* CPU is affected by SMT (cross-thread) return predictions */
 #define SMT_RSB		BIT(4)
+/* CPU is affected by SRSO */
+#define SRSO		BIT(5)

 static const struct x86_cpu_id cpu_vuln_blacklist[] __initconst = {
 	VULNBL_INTEL_STEPPINGS(IVYBRIDGE,	X86_STEPPING_ANY,	SRBDS),
···
 	VULNBL_AMD(0x15, RETBLEED),
 	VULNBL_AMD(0x16, RETBLEED),
-	VULNBL_AMD(0x17, RETBLEED | SMT_RSB),
+	VULNBL_AMD(0x17, RETBLEED | SMT_RSB | SRSO),
 	VULNBL_HYGON(0x18, RETBLEED | SMT_RSB),
+	VULNBL_AMD(0x19, SRSO),
 	{}
 };
···
 	if (cpu_matches(cpu_vuln_blacklist, SMT_RSB))
 		setup_force_cpu_bug(X86_BUG_SMT_RSB);

+	if (cpu_matches(cpu_vuln_blacklist, SRSO))
+		setup_force_cpu_bug(X86_BUG_SRSO);
+
 	if (cpu_matches(cpu_vuln_whitelist, NO_MELTDOWN))
 		return;
+27 -2
arch/x86/kernel/vmlinux.lds.S
···
 		SOFTIRQENTRY_TEXT
 #ifdef CONFIG_RETPOLINE
 		__indirect_thunk_start = .;
-		*(.text.__x86.*)
+		*(.text.__x86.indirect_thunk)
+		*(.text.__x86.return_thunk)
 		__indirect_thunk_end = .;
 #endif
 		STATIC_CALL_TEXT

 		ALIGN_ENTRY_TEXT_BEGIN
+#ifdef CONFIG_CPU_SRSO
+		*(.text.__x86.rethunk_untrain)
+#endif
+
 		ENTRY_TEXT
+
+#ifdef CONFIG_CPU_SRSO
+		/*
+		 * See the comment above srso_untrain_ret_alias()'s
+		 * definition.
+		 */
+		. = srso_untrain_ret_alias | (1 << 2) | (1 << 8) | (1 << 14) | (1 << 20);
+		*(.text.__x86.rethunk_safe)
+#endif
 		ALIGN_ENTRY_TEXT_END
 		*(.gnu.warning)
···
 #endif

 #ifdef CONFIG_RETHUNK
-. = ASSERT((__x86_return_thunk & 0x3f) == 0, "__x86_return_thunk not cacheline-aligned");
+. = ASSERT((__ret & 0x3f) == 0, "__ret not cacheline-aligned");
+. = ASSERT((srso_safe_ret & 0x3f) == 0, "srso_safe_ret not cacheline-aligned");
+#endif
+
+#ifdef CONFIG_CPU_SRSO
+/*
+ * GNU ld cannot do XOR so do: (A | B) - (A & B) in order to compute the XOR
+ * of the two function addresses:
+ */
+. = ASSERT(((srso_untrain_ret_alias | srso_safe_ret_alias) -
+	    (srso_untrain_ret_alias & srso_safe_ret_alias)) == ((1 << 2) | (1 << 8) | (1 << 14) | (1 << 20)),
+	   "SRSO function pair won't alias");
 #endif

 #endif /* CONFIG_X86_64 */
+78 -4
arch/x86/lib/retpoline.S
···
 #include <asm/unwind_hints.h>
 #include <asm/percpu.h>
 #include <asm/frame.h>
+#include <asm/nops.h>

 .section .text.__x86.indirect_thunk

···
  */
 #ifdef CONFIG_RETHUNK

+/*
+ * srso_untrain_ret_alias() and srso_safe_ret_alias() are placed at
+ * special addresses:
+ *
+ * - srso_untrain_ret_alias() is 2M aligned
+ * - srso_safe_ret_alias() is also in the same 2M page but bits 2, 8, 14
+ * and 20 in its virtual address are set (while those bits in the
+ * srso_untrain_ret_alias() function are cleared).
+ *
+ * This guarantees that those two addresses will alias in the branch
+ * target buffer of Zen3/4 generations, leading to any potential
+ * poisoned entries at that BTB slot to get evicted.
+ *
+ * As a result, srso_safe_ret_alias() becomes a safe return.
+ */
+#ifdef CONFIG_CPU_SRSO
+	.section .text.__x86.rethunk_untrain
+
+SYM_START(srso_untrain_ret_alias, SYM_L_GLOBAL, SYM_A_NONE)
+	ASM_NOP2
+	lfence
+	jmp __x86_return_thunk
+SYM_FUNC_END(srso_untrain_ret_alias)
+__EXPORT_THUNK(srso_untrain_ret_alias)
+
+	.section .text.__x86.rethunk_safe
+#endif
+
+/* Needs a definition for the __x86_return_thunk alternative below. */
+SYM_START(srso_safe_ret_alias, SYM_L_GLOBAL, SYM_A_NONE)
+#ifdef CONFIG_CPU_SRSO
+	add $8, %_ASM_SP
+	UNWIND_HINT_FUNC
+#endif
+	ANNOTATE_UNRET_SAFE
+	ret
+	int3
+SYM_FUNC_END(srso_safe_ret_alias)
+
 .section .text.__x86.return_thunk

 /*
···
  * from re-poisoning the BTB prediction.
  */
 	.align 64
-	.skip 64 - (__x86_return_thunk - zen_untrain_ret), 0xcc
+	.skip 64 - (__ret - zen_untrain_ret), 0xcc
 SYM_START(zen_untrain_ret, SYM_L_GLOBAL, SYM_A_NONE)
 	ANNOTATE_NOENDBR
 	/*
···
  * evicted, __x86_return_thunk will suffer Straight Line Speculation
  * which will be contained safely by the INT3.
  */
-SYM_INNER_LABEL(__x86_return_thunk, SYM_L_GLOBAL)
+SYM_INNER_LABEL(__ret, SYM_L_GLOBAL)
 	ret
 	int3
-SYM_CODE_END(__x86_return_thunk)
+SYM_CODE_END(__ret)

 /*
  * Ensure the TEST decoding / BTB invalidation is complete.
···
  * Jump back and execute the RET in the middle of the TEST instruction.
  * INT3 is for SLS protection.
  */
-	jmp __ret
+	jmp __ret
 	int3
 SYM_FUNC_END(zen_untrain_ret)
 __EXPORT_THUNK(zen_untrain_ret)

+/*
+ * SRSO untraining sequence for Zen1/2, similar to zen_untrain_ret()
+ * above. On kernel entry, srso_untrain_ret() is executed which is a
+ *
+ * movabs $0xccccccc308c48348,%rax
+ *
+ * and when the return thunk executes the inner label srso_safe_ret()
+ * later, it is a stack manipulation and a RET which is mispredicted and
+ * thus a "safe" one to use.
+ */
+	.align 64
+	.skip 64 - (srso_safe_ret - srso_untrain_ret), 0xcc
+SYM_START(srso_untrain_ret, SYM_L_GLOBAL, SYM_A_NONE)
+	ANNOTATE_NOENDBR
+	.byte 0x48, 0xb8
+
+SYM_INNER_LABEL(srso_safe_ret, SYM_L_GLOBAL)
+	add $8, %_ASM_SP
+	ret
+	int3
+	int3
+	int3
+	lfence
+	call srso_safe_ret
+	int3
+SYM_CODE_END(srso_safe_ret)
+SYM_FUNC_END(srso_untrain_ret)
+__EXPORT_THUNK(srso_untrain_ret)
+
+SYM_FUNC_START(__x86_return_thunk)
+	ALTERNATIVE_2 "jmp __ret", "call srso_safe_ret", X86_FEATURE_SRSO, \
+			"call srso_safe_ret_alias", X86_FEATURE_SRSO_ALIAS
+	int3
+SYM_CODE_END(__x86_return_thunk)
 EXPORT_SYMBOL(__x86_return_thunk)

 #endif /* CONFIG_RETHUNK */
+8
drivers/base/cpu.c
···
 	return sysfs_emit(buf, "Not affected\n");
 }

+ssize_t __weak cpu_show_spec_rstack_overflow(struct device *dev,
+					     struct device_attribute *attr, char *buf)
+{
+	return sysfs_emit(buf, "Not affected\n");
+}
+
 static DEVICE_ATTR(meltdown, 0444, cpu_show_meltdown, NULL);
 static DEVICE_ATTR(spectre_v1, 0444, cpu_show_spectre_v1, NULL);
 static DEVICE_ATTR(spectre_v2, 0444, cpu_show_spectre_v2, NULL);
···
 static DEVICE_ATTR(srbds, 0444, cpu_show_srbds, NULL);
 static DEVICE_ATTR(mmio_stale_data, 0444, cpu_show_mmio_stale_data, NULL);
 static DEVICE_ATTR(retbleed, 0444, cpu_show_retbleed, NULL);
+static DEVICE_ATTR(spec_rstack_overflow, 0444, cpu_show_spec_rstack_overflow, NULL);

 static struct attribute *cpu_root_vulnerabilities_attrs[] = {
 	&dev_attr_meltdown.attr,
···
 	&dev_attr_srbds.attr,
 	&dev_attr_mmio_stale_data.attr,
 	&dev_attr_retbleed.attr,
+	&dev_attr_spec_rstack_overflow.attr,
 	NULL
 };
+2
include/linux/cpu.h
···
 					char *buf);
 extern ssize_t cpu_show_retbleed(struct device *dev,
 				 struct device_attribute *attr, char *buf);
+extern ssize_t cpu_show_spec_rstack_overflow(struct device *dev,
+					     struct device_attribute *attr, char *buf);

 extern __printf(4, 5)
 struct device *cpu_device_create(struct device *parent, void *drvdata,
+4 -1
tools/objtool/arch/x86/decode.c
···
 bool arch_is_rethunk(struct symbol *sym)
 {
-	return !strcmp(sym->name, "__x86_return_thunk");
+	return !strcmp(sym->name, "__x86_return_thunk") ||
+	       !strcmp(sym->name, "srso_untrain_ret") ||
+	       !strcmp(sym->name, "srso_safe_ret") ||
+	       !strcmp(sym->name, "__ret");
 }