Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'x86_bugs_for_v6.17_rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip

Pull x86 CPU mitigation updates from Borislav Petkov:

- Untangle the Retbleed mitigation from the ITS mitigation on Intel:
  allow ITS to enable stuffing independently of Retbleed, and do some
  cleanups to simplify and streamline the code

- Simplify SRSO handling and make mitigation-type selection more
  versatile depending on the Retbleed mitigation selected. Simplify the
  code somewhat

- Add the second part of the attack vector controls, which provide a
  much friendlier user interface to the speculation mitigations than
  selecting each one individually, as is required now.

  Instead, whole attack vectors relevant to the system in use can be
  selected, and protection is enabled only against those vectors, thus
  giving some performance back to users

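To illustrate the new interface (option names per the attack_vector_controls.rst file added in this merge), the ``mitigations=`` value can be thought of as a global option followed by per-vector opt-outs. The following sketch is illustrative only, not the kernel's actual parsing code:

```python
# Sketch (NOT kernel code): decompose a "mitigations=" value into its
# global option and attack-vector opt-outs, per the new syntax.
GLOBALS = {"off", "auto", "auto,nosmt"}
VECTOR_OPTS = {"no_user_kernel", "no_user_user", "no_guest_host",
               "no_guest_guest", "no_cross_thread"}

def parse_mitigations(value: str) -> tuple[str, set[str]]:
    """Split a mitigations= value into (global option, vector opt-outs)."""
    parts = value.split(",")
    # "auto,nosmt" is a single two-token global option.
    if parts[:2] == ["auto", "nosmt"]:
        glob, rest = "auto,nosmt", parts[2:]
    elif parts[0] in GLOBALS:
        glob, rest = parts[0], parts[1:]
    else:
        # An omitted global option defaults to "auto".
        glob, rest = "auto", parts[1:] if parts[0] == "" else parts
    unknown = set(rest) - VECTOR_OPTS
    if unknown:
        raise ValueError(f"unknown options: {unknown}")
    return glob, set(rest)

print(parse_mitigations("auto,nosmt,no_guest_host,no_guest_guest"))
```

For example, ``mitigations=,no_cross_thread`` parses as global option ``auto`` with the single opt-out ``no_cross_thread``.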
* tag 'x86_bugs_for_v6.17_rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (31 commits)
x86/bugs: Print enabled attack vectors
x86/bugs: Add attack vector controls for TSA
x86/pti: Add attack vector controls for PTI
x86/bugs: Add attack vector controls for ITS
x86/bugs: Add attack vector controls for SRSO
x86/bugs: Add attack vector controls for L1TF
x86/bugs: Add attack vector controls for spectre_v2
x86/bugs: Add attack vector controls for BHI
x86/bugs: Add attack vector controls for spectre_v2_user
x86/bugs: Add attack vector controls for retbleed
x86/bugs: Add attack vector controls for spectre_v1
x86/bugs: Add attack vector controls for GDS
x86/bugs: Add attack vector controls for SRBDS
x86/bugs: Add attack vector controls for RFDS
x86/bugs: Add attack vector controls for MMIO
x86/bugs: Add attack vector controls for TAA
x86/bugs: Add attack vector controls for MDS
x86/bugs: Define attack vectors relevant for each bug
x86/Kconfig: Add arch attack vector support
cpu: Define attack vectors
...

+713 -154
+238
Documentation/admin-guide/hw-vuln/attack_vector_controls.rst
.. SPDX-License-Identifier: GPL-2.0

Attack Vector Controls
======================

Attack vector controls provide a simple method to configure only the mitigations
for CPU vulnerabilities which are relevant given the intended use of a system.
Administrators are encouraged to consider which attack vectors are relevant and
disable all others in order to recoup system performance.

When new relevant CPU vulnerabilities are found, they will be added to these
attack vector controls so administrators will likely not need to reconfigure
their command line parameters as mitigations will continue to be correctly
applied based on the chosen attack vector controls.

Attack Vectors
--------------

There are 5 sets of attack-vector mitigations currently supported by the kernel:

#. :ref:`user_kernel`
#. :ref:`user_user`
#. :ref:`guest_host`
#. :ref:`guest_guest`
#. :ref:`smt`

To control the enabled attack vectors, see :ref:`cmdline`.

.. _user_kernel:

User-to-Kernel
^^^^^^^^^^^^^^

The user-to-kernel attack vector involves a malicious userspace program
attempting to leak kernel data into userspace by exploiting a CPU vulnerability.
The kernel data involved might be limited to certain kernel memory, or include
all memory in the system, depending on the vulnerability exploited.

If no untrusted userspace applications are being run, such as with single-user
systems, consider disabling user-to-kernel mitigations.

Note that the CPU vulnerabilities mitigated by Linux have generally not been
shown to be exploitable from browser-based sandboxes. User-to-kernel
mitigations are therefore mostly relevant if unknown userspace applications may
be run by untrusted users.

*user-to-kernel mitigations are enabled by default*

.. _user_user:

User-to-User
^^^^^^^^^^^^

The user-to-user attack vector involves a malicious userspace program attempting
to influence the behavior of another unsuspecting userspace program in order to
exfiltrate data. The vulnerability of a userspace program is based on the
program itself and the interfaces it provides.

If no untrusted userspace applications are being run, consider disabling
user-to-user mitigations.

Note that because the Linux kernel contains a mapping of all physical memory,
preventing a malicious userspace program from leaking data from another
userspace program requires mitigating user-to-kernel attacks as well for
complete protection.

*user-to-user mitigations are enabled by default*

.. _guest_host:

Guest-to-Host
^^^^^^^^^^^^^

The guest-to-host attack vector involves a malicious VM attempting to leak
hypervisor data into the VM. The data involved may be limited, or may
potentially include all memory in the system, depending on the vulnerability
exploited.

If no untrusted VMs are being run, consider disabling guest-to-host mitigations.

*guest-to-host mitigations are enabled by default if KVM support is present*

.. _guest_guest:

Guest-to-Guest
^^^^^^^^^^^^^^

The guest-to-guest attack vector involves a malicious VM attempting to influence
the behavior of another unsuspecting VM in order to exfiltrate data. The
vulnerability of a VM is based on the code inside the VM itself and the
interfaces it provides.

If no untrusted VMs, or only a single VM, is being run, consider disabling
guest-to-guest mitigations.

Similar to the user-to-user attack vector, preventing a malicious VM from
leaking data from another VM requires mitigating guest-to-host attacks as well
due to the Linux kernel phys map.

*guest-to-guest mitigations are enabled by default if KVM support is present*

.. _smt:

Cross-Thread
^^^^^^^^^^^^

The cross-thread attack vector involves a malicious userspace program or
malicious VM either observing or attempting to influence the behavior of code
running on the SMT sibling thread in order to exfiltrate data.

Many cross-thread attacks can only be mitigated if SMT is disabled, which will
result in reduced CPU core count and reduced performance.

If cross-thread mitigations are fully enabled ('auto,nosmt'), all mitigations
for cross-thread attacks will be enabled. SMT may be disabled depending on
which vulnerabilities are present in the CPU.

If cross-thread mitigations are partially enabled ('auto'), mitigations for
cross-thread attacks will be enabled but SMT will not be disabled.

If cross-thread mitigations are disabled, no mitigations for cross-thread
attacks will be enabled.

Cross-thread mitigation may not be required if core-scheduling or similar
techniques are used to prevent untrusted workloads from running on SMT siblings.

*cross-thread mitigations default to partially enabled*

.. _cmdline:

Command Line Controls
---------------------

Attack vectors are controlled through the mitigations= command line option. The
value provided begins with a global option and then may optionally include one
or more options to disable various attack vectors.

Format:
  | ``mitigations=[global]``
  | ``mitigations=[global],[attack vectors]``

Global options:

============ =============================================================
Option       Description
============ =============================================================
'off'        All attack vectors disabled.
'auto'       All attack vectors enabled, partial cross-thread mitigations.
'auto,nosmt' All attack vectors enabled, full cross-thread mitigations.
============ =============================================================

Attack vector options:

================= =======================================
Option            Description
================= =======================================
'no_user_kernel'  Disables user-to-kernel mitigations.
'no_user_user'    Disables user-to-user mitigations.
'no_guest_host'   Disables guest-to-host mitigations.
'no_guest_guest'  Disables guest-to-guest mitigations.
'no_cross_thread' Disables all cross-thread mitigations.
================= =======================================

Multiple attack vector options may be specified in a comma-separated list. If
the global option is not specified, it defaults to 'auto'. The global option
'off' is equivalent to disabling all attack vectors.

Examples:
  | ``mitigations=auto,no_user_kernel``

  Enable all attack vectors except user-to-kernel. Partial cross-thread
  mitigations.

  | ``mitigations=auto,nosmt,no_guest_host,no_guest_guest``

  Enable all attack vectors and cross-thread mitigations except for
  guest-to-host and guest-to-guest mitigations.

  | ``mitigations=,no_cross_thread``

  Enable all attack vectors but not cross-thread mitigations.

Interactions with command-line options
--------------------------------------

Vulnerability-specific controls (e.g. "retbleed=off") take precedence over all
attack vector controls. Mitigations for individual vulnerabilities may be
turned on or off via their command-line options regardless of the attack vector
controls.

Summary of attack-vector mitigations
------------------------------------

When a vulnerability is mitigated due to an attack-vector control, the default
mitigation option for that particular vulnerability is used. To use a different
mitigation, please use the vulnerability-specific command line option.

The table below summarizes which vulnerabilities are mitigated when different
attack vectors are enabled, assuming the CPU is vulnerable.

=============== ============== ============ ============= ============== ============ ========
Vulnerability   User-to-Kernel User-to-User Guest-to-Host Guest-to-Guest Cross-Thread Notes
=============== ============== ============ ============= ============== ============ ========
BHI                   X                           X
ITS                   X                           X
GDS                   X              X            X              X             *      (Note 1)
L1TF                  X                           X                            *      (Note 2)
MDS                   X              X            X              X             *      (Note 2)
MMIO                  X              X            X              X             *      (Note 2)
Meltdown              X
Retbleed              X                           X                            *      (Note 3)
RFDS                  X              X            X              X
Spectre_v1            X
Spectre_v2            X                           X
Spectre_v2_user                      X                           X             *      (Note 1)
SRBDS                 X              X            X              X
SRSO                  X                           X
SSB                                                                                   (Note 4)
TAA                   X              X            X              X             *      (Note 2)
TSA                   X              X            X              X
=============== ============== ============ ============= ============== ============ ========

Notes:
   1 --  Can be mitigated without disabling SMT.

   2 --  Disables SMT if cross-thread mitigations are fully enabled and the
         CPU is vulnerable

   3 --  Disables SMT if cross-thread mitigations are fully enabled, the CPU
         is vulnerable, and STIBP is not supported

   4 --  Speculative store bypass is always enabled by default (no kernel
         mitigation applied) unless overridden with the
         spec_store_bypass_disable option

When an attack-vector is disabled, all mitigations for the vulnerabilities
listed in the above table are disabled, unless mitigation is required for a
different enabled attack-vector or a mitigation is explicitly selected via a
vulnerability-specific command line option.
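The per-vulnerability decisions made by the new should_mitigate_vuln() helper in the bugs.c diff below can be modeled roughly as follows. This is an illustrative sketch, not kernel code; the names are invented for the model:

```python
# Rough model (NOT kernel code) of mapping attack-vector selections to
# per-vulnerability mitigation decisions, mirroring should_mitigate_vuln().
USER_KERNEL, USER_USER, GUEST_HOST, GUEST_GUEST = "uk", "uu", "gh", "gg"

# Which enabled attack vectors make a vulnerability's mitigation necessary
# (a small subset of the kernel's table, for illustration).
RELEVANT_VECTORS = {
    "spectre_v1": {USER_KERNEL},
    "retbleed":   {USER_KERNEL, GUEST_HOST},
    "mds":        {USER_KERNEL, USER_USER, GUEST_HOST, GUEST_GUEST},
}

def should_mitigate(vuln: str, enabled_vectors: set[str]) -> bool:
    """A vulnerability is mitigated iff any relevant vector is enabled."""
    return bool(RELEVANT_VECTORS[vuln] & enabled_vectors)

# Example: a host that runs untrusted VMs but no untrusted userspace.
enabled = {GUEST_HOST, GUEST_GUEST}
print(should_mitigate("spectre_v1", enabled))  # → False (user->kernel only)
print(should_mitigate("mds", enabled))         # → True  (leaks across spaces)
```

The design point this captures: cross-address-space leaks (MDS and friends) are mitigated if any of the four vectors is enabled, while vectors-pecific bugs like Spectre v1 stay off when their single relevant vector is disabled.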
+1
Documentation/admin-guide/hw-vuln/index.rst
···
 .. toctree::
    :maxdepth: 1

+   attack_vector_controls
    spectre
    l1tf
    mds
+4
Documentation/admin-guide/kernel-parameters.txt
···
 			mmio_stale_data=full,nosmt [X86]
 			retbleed=auto,nosmt [X86]

+			[X86] After one of the above options, additionally
+			supports attack-vector based controls as documented in
+			Documentation/admin-guide/hw-vuln/attack_vector_controls.rst
+
 	mminit_loglevel=
 			[KNL,EARLY] When CONFIG_DEBUG_MEMORY_INIT is set, this
 			parameter allows control of the logging verbosity for
+3
arch/Kconfig
···
 	  An architecture can select this if it provides arch/<arch>/tools/Makefile
 	  with .arch.vmlinux.o target to be linked into vmlinux.

+config ARCH_HAS_CPU_ATTACK_VECTORS
+	bool
+
 endmenu
+1
arch/x86/Kconfig
···
 	select ARCH_ENABLE_SPLIT_PMD_PTLOCK	if (PGTABLE_LEVELS > 2) && (X86_64 || X86_PAE)
 	select ARCH_ENABLE_THP_MIGRATION	if X86_64 && TRANSPARENT_HUGEPAGE
 	select ARCH_HAS_ACPI_TABLE_UPGRADE	if ACPI
+	select ARCH_HAS_CPU_ATTACK_VECTORS	if CPU_MITIGATIONS
 	select ARCH_HAS_CACHE_LINE_SIZE
 	select ARCH_HAS_CPU_CACHE_INVALIDATE_MEMREGION
 	select ARCH_HAS_CPU_FINALIZE_INIT
+323 -142
arch/x86/kernel/cpu/bugs.c
···
 
 static void __init set_return_thunk(void *thunk)
 {
-	if (x86_return_thunk != __x86_return_thunk)
-		pr_warn("x86/bugs: return thunk changed\n");
-
 	x86_return_thunk = thunk;
+
+	pr_info("active return thunk: %ps\n", thunk);
 }
 
 /* Update SPEC_CTRL MSR and its cached copy unconditionally */
···
 DEFINE_STATIC_KEY_FALSE(cpu_buf_vm_clear);
 EXPORT_SYMBOL_GPL(cpu_buf_vm_clear);
 
+#undef pr_fmt
+#define pr_fmt(fmt)	"mitigations: " fmt
+
+static void __init cpu_print_attack_vectors(void)
+{
+	pr_info("Enabled attack vectors: ");
+
+	if (cpu_attack_vector_mitigated(CPU_MITIGATE_USER_KERNEL))
+		pr_cont("user_kernel, ");
+
+	if (cpu_attack_vector_mitigated(CPU_MITIGATE_USER_USER))
+		pr_cont("user_user, ");
+
+	if (cpu_attack_vector_mitigated(CPU_MITIGATE_GUEST_HOST))
+		pr_cont("guest_host, ");
+
+	if (cpu_attack_vector_mitigated(CPU_MITIGATE_GUEST_GUEST))
+		pr_cont("guest_guest, ");
+
+	pr_cont("SMT mitigations: ");
+
+	switch (smt_mitigations) {
+	case SMT_MITIGATIONS_OFF:
+		pr_cont("off\n");
+		break;
+	case SMT_MITIGATIONS_AUTO:
+		pr_cont("auto\n");
+		break;
+	case SMT_MITIGATIONS_ON:
+		pr_cont("on\n");
+	}
+}
+
 void __init cpu_select_mitigations(void)
 {
 	/*
···
 	}
 
 	x86_arch_cap_msr = x86_read_arch_cap_msr();
+
+	cpu_print_attack_vectors();
 
 	/* Select the proper CPU mitigations before patching alternatives: */
 	spectre_v1_select_mitigation();
···
 #undef pr_fmt
 #define pr_fmt(fmt)	"MDS: " fmt
 
+/*
+ * Returns true if vulnerability should be mitigated based on the
+ * selected attack vector controls.
+ *
+ * See Documentation/admin-guide/hw-vuln/attack_vector_controls.rst
+ */
+static bool __init should_mitigate_vuln(unsigned int bug)
+{
+	switch (bug) {
+	/*
+	 * The only runtime-selected spectre_v1 mitigations in the kernel are
+	 * related to SWAPGS protection on kernel entry. Therefore, protection
+	 * is only required for the user->kernel attack vector.
+	 */
+	case X86_BUG_SPECTRE_V1:
+		return cpu_attack_vector_mitigated(CPU_MITIGATE_USER_KERNEL);
+
+	case X86_BUG_SPECTRE_V2:
+	case X86_BUG_RETBLEED:
+	case X86_BUG_SRSO:
+	case X86_BUG_L1TF:
+	case X86_BUG_ITS:
+		return cpu_attack_vector_mitigated(CPU_MITIGATE_USER_KERNEL) ||
+		       cpu_attack_vector_mitigated(CPU_MITIGATE_GUEST_HOST);
+
+	case X86_BUG_SPECTRE_V2_USER:
+		return cpu_attack_vector_mitigated(CPU_MITIGATE_USER_USER) ||
+		       cpu_attack_vector_mitigated(CPU_MITIGATE_GUEST_GUEST);
+
+	/*
+	 * All the vulnerabilities below allow potentially leaking data
+	 * across address spaces. Therefore, mitigation is required for
+	 * any of these 4 attack vectors.
+	 */
+	case X86_BUG_MDS:
+	case X86_BUG_TAA:
+	case X86_BUG_MMIO_STALE_DATA:
+	case X86_BUG_RFDS:
+	case X86_BUG_SRBDS:
+		return cpu_attack_vector_mitigated(CPU_MITIGATE_USER_KERNEL) ||
+		       cpu_attack_vector_mitigated(CPU_MITIGATE_GUEST_HOST) ||
+		       cpu_attack_vector_mitigated(CPU_MITIGATE_USER_USER) ||
+		       cpu_attack_vector_mitigated(CPU_MITIGATE_GUEST_GUEST);
+
+	case X86_BUG_GDS:
+		return cpu_attack_vector_mitigated(CPU_MITIGATE_USER_KERNEL) ||
+		       cpu_attack_vector_mitigated(CPU_MITIGATE_GUEST_HOST) ||
+		       cpu_attack_vector_mitigated(CPU_MITIGATE_USER_USER) ||
+		       cpu_attack_vector_mitigated(CPU_MITIGATE_GUEST_GUEST) ||
+		       (smt_mitigations != SMT_MITIGATIONS_OFF);
+	default:
+		WARN(1, "Unknown bug %x\n", bug);
+		return false;
+	}
+}
+
 /* Default mitigation for MDS-affected CPUs */
 static enum mds_mitigations mds_mitigation __ro_after_init =
 	IS_ENABLED(CONFIG_MITIGATION_MDS) ? MDS_MITIGATION_AUTO : MDS_MITIGATION_OFF;
···
 
 static void __init mds_select_mitigation(void)
 {
-	if (!boot_cpu_has_bug(X86_BUG_MDS) || cpu_mitigations_off()) {
+	if (!boot_cpu_has_bug(X86_BUG_MDS)) {
 		mds_mitigation = MDS_MITIGATION_OFF;
 		return;
 	}
 
-	if (mds_mitigation == MDS_MITIGATION_AUTO)
-		mds_mitigation = MDS_MITIGATION_FULL;
+	if (mds_mitigation == MDS_MITIGATION_AUTO) {
+		if (should_mitigate_vuln(X86_BUG_MDS))
+			mds_mitigation = MDS_MITIGATION_FULL;
+		else
+			mds_mitigation = MDS_MITIGATION_OFF;
+	}
 
 	if (mds_mitigation == MDS_MITIGATION_OFF)
 		return;
···
 
 static void __init mds_update_mitigation(void)
 {
-	if (!boot_cpu_has_bug(X86_BUG_MDS) || cpu_mitigations_off())
+	if (!boot_cpu_has_bug(X86_BUG_MDS))
 		return;
 
 	/* If TAA, MMIO, or RFDS are being mitigated, MDS gets mitigated too. */
···
 	    mds_mitigation == MDS_MITIGATION_VMWERV) {
 		setup_force_cpu_cap(X86_FEATURE_CLEAR_CPU_BUF);
 		if (!boot_cpu_has(X86_BUG_MSBDS_ONLY) &&
-		    (mds_nosmt || cpu_mitigations_auto_nosmt()))
+		    (mds_nosmt || smt_mitigations == SMT_MITIGATIONS_ON))
 			cpu_smt_disable(false);
 	}
 }
···
 		return;
 	}
 
-	if (cpu_mitigations_off())
-		taa_mitigation = TAA_MITIGATION_OFF;
-
 	/* Microcode will be checked in taa_update_mitigation(). */
-	if (taa_mitigation == TAA_MITIGATION_AUTO)
-		taa_mitigation = TAA_MITIGATION_VERW;
+	if (taa_mitigation == TAA_MITIGATION_AUTO) {
+		if (should_mitigate_vuln(X86_BUG_TAA))
+			taa_mitigation = TAA_MITIGATION_VERW;
+		else
+			taa_mitigation = TAA_MITIGATION_OFF;
+	}
 
 	if (taa_mitigation != TAA_MITIGATION_OFF)
 		verw_clear_cpu_buf_mitigation_selected = true;
···
 
 static void __init taa_update_mitigation(void)
 {
-	if (!taa_vulnerable() || cpu_mitigations_off())
+	if (!taa_vulnerable())
 		return;
 
 	if (verw_clear_cpu_buf_mitigation_selected)
···
 	 */
 	setup_force_cpu_cap(X86_FEATURE_CLEAR_CPU_BUF);
 
-	if (taa_nosmt || cpu_mitigations_auto_nosmt())
+	if (taa_nosmt || smt_mitigations == SMT_MITIGATIONS_ON)
 		cpu_smt_disable(false);
 }
···
 	}
 
 	/* Microcode will be checked in mmio_update_mitigation(). */
-	if (mmio_mitigation == MMIO_MITIGATION_AUTO)
-		mmio_mitigation = MMIO_MITIGATION_VERW;
+	if (mmio_mitigation == MMIO_MITIGATION_AUTO) {
+		if (should_mitigate_vuln(X86_BUG_MMIO_STALE_DATA))
+			mmio_mitigation = MMIO_MITIGATION_VERW;
+		else
+			mmio_mitigation = MMIO_MITIGATION_OFF;
+	}
 
 	if (mmio_mitigation == MMIO_MITIGATION_OFF)
 		return;
···
 
 static void __init mmio_update_mitigation(void)
 {
-	if (!boot_cpu_has_bug(X86_BUG_MMIO_STALE_DATA) || cpu_mitigations_off())
+	if (!boot_cpu_has_bug(X86_BUG_MMIO_STALE_DATA))
 		return;
 
 	if (verw_clear_cpu_buf_mitigation_selected)
···
 	if (!(x86_arch_cap_msr & ARCH_CAP_FBSDP_NO))
 		static_branch_enable(&cpu_buf_idle_clear);
 
-	if (mmio_nosmt || cpu_mitigations_auto_nosmt())
+	if (mmio_nosmt || smt_mitigations == SMT_MITIGATIONS_ON)
 		cpu_smt_disable(false);
 }
···
 
 static void __init rfds_select_mitigation(void)
 {
-	if (!boot_cpu_has_bug(X86_BUG_RFDS) || cpu_mitigations_off()) {
+	if (!boot_cpu_has_bug(X86_BUG_RFDS)) {
 		rfds_mitigation = RFDS_MITIGATION_OFF;
 		return;
 	}
 
-	if (rfds_mitigation == RFDS_MITIGATION_AUTO)
-		rfds_mitigation = RFDS_MITIGATION_VERW;
+	if (rfds_mitigation == RFDS_MITIGATION_AUTO) {
+		if (should_mitigate_vuln(X86_BUG_RFDS))
+			rfds_mitigation = RFDS_MITIGATION_VERW;
+		else
+			rfds_mitigation = RFDS_MITIGATION_OFF;
+	}
 
 	if (rfds_mitigation == RFDS_MITIGATION_OFF)
 		return;
···
 
 static void __init rfds_update_mitigation(void)
 {
-	if (!boot_cpu_has_bug(X86_BUG_RFDS) || cpu_mitigations_off())
+	if (!boot_cpu_has_bug(X86_BUG_RFDS))
 		return;
 
 	if (verw_clear_cpu_buf_mitigation_selected)
···
 
 static void __init srbds_select_mitigation(void)
 {
-	if (!boot_cpu_has_bug(X86_BUG_SRBDS) || cpu_mitigations_off()) {
+	if (!boot_cpu_has_bug(X86_BUG_SRBDS)) {
 		srbds_mitigation = SRBDS_MITIGATION_OFF;
 		return;
 	}
 
-	if (srbds_mitigation == SRBDS_MITIGATION_AUTO)
-		srbds_mitigation = SRBDS_MITIGATION_FULL;
+	if (srbds_mitigation == SRBDS_MITIGATION_AUTO) {
+		if (should_mitigate_vuln(X86_BUG_SRBDS))
+			srbds_mitigation = SRBDS_MITIGATION_FULL;
+		else {
+			srbds_mitigation = SRBDS_MITIGATION_OFF;
+			return;
+		}
+	}
 
 	/*
 	 * Check to see if this is one of the MDS_NO systems supporting TSX that
···
 		return;
 	}
 
-	if (cpu_mitigations_off())
-		gds_mitigation = GDS_MITIGATION_OFF;
 	/* Will verify below that mitigation _can_ be disabled */
-
-	if (gds_mitigation == GDS_MITIGATION_AUTO)
-		gds_mitigation = GDS_MITIGATION_FULL;
+	if (gds_mitigation == GDS_MITIGATION_AUTO) {
+		if (should_mitigate_vuln(X86_BUG_GDS))
+			gds_mitigation = GDS_MITIGATION_FULL;
+		else {
+			gds_mitigation = GDS_MITIGATION_OFF;
+			return;
+		}
+	}
 
 	/* No microcode */
 	if (!(x86_arch_cap_msr & ARCH_CAP_GDS_CTRL)) {
···
 
 static void __init spectre_v1_select_mitigation(void)
 {
-	if (!boot_cpu_has_bug(X86_BUG_SPECTRE_V1) || cpu_mitigations_off())
+	if (!boot_cpu_has_bug(X86_BUG_SPECTRE_V1))
+		spectre_v1_mitigation = SPECTRE_V1_MITIGATION_NONE;
+
+	if (!should_mitigate_vuln(X86_BUG_SPECTRE_V1))
 		spectre_v1_mitigation = SPECTRE_V1_MITIGATION_NONE;
 }
 
 static void __init spectre_v1_apply_mitigation(void)
 {
-	if (!boot_cpu_has_bug(X86_BUG_SPECTRE_V1) || cpu_mitigations_off())
+	if (!boot_cpu_has_bug(X86_BUG_SPECTRE_V1))
 		return;
 
 	if (spectre_v1_mitigation == SPECTRE_V1_MITIGATION_AUTO) {
···
 
 enum spectre_v2_mitigation spectre_v2_enabled __ro_after_init = SPECTRE_V2_NONE;
 
+/* Depends on spectre_v2 mitigation selected already */
+static inline bool cdt_possible(enum spectre_v2_mitigation mode)
+{
+	if (!IS_ENABLED(CONFIG_MITIGATION_CALL_DEPTH_TRACKING) ||
+	    !IS_ENABLED(CONFIG_MITIGATION_RETPOLINE))
+		return false;
+
+	if (mode == SPECTRE_V2_RETPOLINE ||
+	    mode == SPECTRE_V2_EIBRS_RETPOLINE)
+		return true;
+
+	return false;
+}
+
 #undef pr_fmt
 #define pr_fmt(fmt)	"RETBleed: " fmt
···
 	IS_ENABLED(CONFIG_MITIGATION_RETBLEED) ? RETBLEED_MITIGATION_AUTO : RETBLEED_MITIGATION_NONE;
 
 static int __ro_after_init retbleed_nosmt = false;
+
+enum srso_mitigation {
+	SRSO_MITIGATION_NONE,
+	SRSO_MITIGATION_AUTO,
+	SRSO_MITIGATION_UCODE_NEEDED,
+	SRSO_MITIGATION_SAFE_RET_UCODE_NEEDED,
+	SRSO_MITIGATION_MICROCODE,
+	SRSO_MITIGATION_NOSMT,
+	SRSO_MITIGATION_SAFE_RET,
+	SRSO_MITIGATION_IBPB,
+	SRSO_MITIGATION_IBPB_ON_VMEXIT,
+	SRSO_MITIGATION_BP_SPEC_REDUCE,
+};
+
+static enum srso_mitigation srso_mitigation __ro_after_init = SRSO_MITIGATION_AUTO;
 
 static int __init retbleed_parse_cmdline(char *str)
 {
···
 
 static void __init retbleed_select_mitigation(void)
 {
-	if (!boot_cpu_has_bug(X86_BUG_RETBLEED) || cpu_mitigations_off()) {
+	if (!boot_cpu_has_bug(X86_BUG_RETBLEED)) {
 		retbleed_mitigation = RETBLEED_MITIGATION_NONE;
 		return;
 	}
···
 	if (retbleed_mitigation != RETBLEED_MITIGATION_AUTO)
 		return;
 
+	if (!should_mitigate_vuln(X86_BUG_RETBLEED)) {
+		retbleed_mitigation = RETBLEED_MITIGATION_NONE;
+		return;
+	}
+
 	/* Intel mitigation selected in retbleed_update_mitigation() */
 	if (boot_cpu_data.x86_vendor == X86_VENDOR_AMD ||
 	    boot_cpu_data.x86_vendor == X86_VENDOR_HYGON) {
···
 			retbleed_mitigation = RETBLEED_MITIGATION_IBPB;
 		else
 			retbleed_mitigation = RETBLEED_MITIGATION_NONE;
+	} else if (boot_cpu_data.x86_vendor == X86_VENDOR_INTEL) {
+		/* Final mitigation depends on spectre-v2 selection */
+		if (boot_cpu_has(X86_FEATURE_IBRS_ENHANCED))
+			retbleed_mitigation = RETBLEED_MITIGATION_EIBRS;
+		else if (boot_cpu_has(X86_FEATURE_IBRS))
+			retbleed_mitigation = RETBLEED_MITIGATION_IBRS;
+		else
+			retbleed_mitigation = RETBLEED_MITIGATION_NONE;
 	}
 }
 
 static void __init retbleed_update_mitigation(void)
 {
-	if (!boot_cpu_has_bug(X86_BUG_RETBLEED) || cpu_mitigations_off())
+	if (!boot_cpu_has_bug(X86_BUG_RETBLEED))
 		return;
 
-	if (retbleed_mitigation == RETBLEED_MITIGATION_NONE)
-		goto out;
+	/* ITS can also enable stuffing */
+	if (its_mitigation == ITS_MITIGATION_RETPOLINE_STUFF)
+		retbleed_mitigation = RETBLEED_MITIGATION_STUFF;
 
-	/*
-	 * retbleed=stuff is only allowed on Intel. If stuffing can't be used
-	 * then a different mitigation will be selected below.
-	 *
-	 * its=stuff will also attempt to enable stuffing.
-	 */
-	if (retbleed_mitigation == RETBLEED_MITIGATION_STUFF ||
-	    its_mitigation == ITS_MITIGATION_RETPOLINE_STUFF) {
-		if (spectre_v2_enabled != SPECTRE_V2_RETPOLINE) {
-			pr_err("WARNING: retbleed=stuff depends on spectre_v2=retpoline\n");
-			retbleed_mitigation = RETBLEED_MITIGATION_AUTO;
-		} else {
-			if (retbleed_mitigation != RETBLEED_MITIGATION_STUFF)
-				pr_info("Retbleed mitigation updated to stuffing\n");
-
-			retbleed_mitigation = RETBLEED_MITIGATION_STUFF;
-		}
+	/* If SRSO is using IBPB, that works for retbleed too */
+	if (srso_mitigation == SRSO_MITIGATION_IBPB)
+		retbleed_mitigation = RETBLEED_MITIGATION_IBPB;
+
+	if (retbleed_mitigation == RETBLEED_MITIGATION_STUFF &&
+	    !cdt_possible(spectre_v2_enabled)) {
+		pr_err("WARNING: retbleed=stuff depends on retpoline\n");
+		retbleed_mitigation = RETBLEED_MITIGATION_NONE;
 	}
+
 	/*
 	 * Let IBRS trump all on Intel without affecting the effects of the
 	 * retbleed= cmdline option except for call depth based stuffing
···
 		if (retbleed_mitigation != RETBLEED_MITIGATION_STUFF)
 			pr_err(RETBLEED_INTEL_MSG);
 	}
-	/* If nothing has set the mitigation yet, default to NONE. */
-	if (retbleed_mitigation == RETBLEED_MITIGATION_AUTO)
-		retbleed_mitigation = RETBLEED_MITIGATION_NONE;
 	}
-out:
+
 	pr_info("%s\n", retbleed_strings[retbleed_mitigation]);
 }
-
 
 static void __init retbleed_apply_mitigation(void)
 {
···
 	}
 
 	if (mitigate_smt && !boot_cpu_has(X86_FEATURE_STIBP) &&
-	    (retbleed_nosmt || cpu_mitigations_auto_nosmt()))
+	    (retbleed_nosmt || smt_mitigations == SMT_MITIGATIONS_ON))
 		cpu_smt_disable(false);
 }
···
 
 static void __init its_select_mitigation(void)
 {
-	if (!boot_cpu_has_bug(X86_BUG_ITS) || cpu_mitigations_off()) {
+	if (!boot_cpu_has_bug(X86_BUG_ITS)) {
 		its_mitigation = ITS_MITIGATION_OFF;
 		return;
 	}
 
-	if (its_mitigation == ITS_MITIGATION_AUTO)
-		its_mitigation = ITS_MITIGATION_ALIGNED_THUNKS;
+	if (its_mitigation == ITS_MITIGATION_AUTO) {
+		if (should_mitigate_vuln(X86_BUG_ITS))
+			its_mitigation = ITS_MITIGATION_ALIGNED_THUNKS;
+		else
+			its_mitigation = ITS_MITIGATION_OFF;
+	}
 
 	if (its_mitigation == ITS_MITIGATION_OFF)
 		return;
···
 
 static void __init its_update_mitigation(void)
 {
-	if (!boot_cpu_has_bug(X86_BUG_ITS) || cpu_mitigations_off())
+	if (!boot_cpu_has_bug(X86_BUG_ITS))
 		return;
 
 	switch (spectre_v2_enabled) {
 	case SPECTRE_V2_NONE:
-		pr_err("WARNING: Spectre-v2 mitigation is off, disabling ITS\n");
+		if (its_mitigation != ITS_MITIGATION_OFF)
+			pr_err("WARNING: Spectre-v2 mitigation is off, disabling ITS\n");
 		its_mitigation = ITS_MITIGATION_OFF;
 		break;
 	case SPECTRE_V2_RETPOLINE:
+	case SPECTRE_V2_EIBRS_RETPOLINE:
 		/* Retpoline+CDT mitigates ITS */
 		if (retbleed_mitigation == RETBLEED_MITIGATION_STUFF)
 			its_mitigation = ITS_MITIGATION_RETPOLINE_STUFF;
···
 		break;
 	}
 
-	/*
-	 * retbleed_update_mitigation() will try to do stuffing if its=stuff.
-	 * If it can't, such as if spectre_v2!=retpoline, then fall back to
-	 * aligned thunks.
-	 */
 	if (its_mitigation == ITS_MITIGATION_RETPOLINE_STUFF &&
-	    retbleed_mitigation != RETBLEED_MITIGATION_STUFF)
+	    !cdt_possible(spectre_v2_enabled))
 		its_mitigation = ITS_MITIGATION_ALIGNED_THUNKS;
 
 	pr_info("%s\n", its_strings[its_mitigation]);
···
 
 static void __init its_apply_mitigation(void)
 {
-	/* its=stuff forces retbleed stuffing and is enabled there. */
-	if (its_mitigation != ITS_MITIGATION_ALIGNED_THUNKS)
-		return;
+	switch (its_mitigation) {
+	case ITS_MITIGATION_OFF:
+	case ITS_MITIGATION_AUTO:
+	case ITS_MITIGATION_VMEXIT_ONLY:
+		break;
+	case ITS_MITIGATION_ALIGNED_THUNKS:
+		if (!boot_cpu_has(X86_FEATURE_RETPOLINE))
+			setup_force_cpu_cap(X86_FEATURE_INDIRECT_THUNK_ITS);
 
-	if (!boot_cpu_has(X86_FEATURE_RETPOLINE))
-		setup_force_cpu_cap(X86_FEATURE_INDIRECT_THUNK_ITS);
-
-	setup_force_cpu_cap(X86_FEATURE_RETHUNK);
-	set_return_thunk(its_return_thunk);
+		setup_force_cpu_cap(X86_FEATURE_RETHUNK);
+		set_return_thunk(its_return_thunk);
+		break;
+	case ITS_MITIGATION_RETPOLINE_STUFF:
+		setup_force_cpu_cap(X86_FEATURE_RETHUNK);
+		setup_force_cpu_cap(X86_FEATURE_CALL_DEPTH);
+		set_return_thunk(call_depth_return_thunk);
+		break;
+	}
 }
 
 #undef pr_fmt
···
 
 static void __init tsa_select_mitigation(void)
 {
-	if (cpu_mitigations_off() || !boot_cpu_has_bug(X86_BUG_TSA)) {
+	if (!boot_cpu_has_bug(X86_BUG_TSA)) {
 		tsa_mitigation = TSA_MITIGATION_NONE;
 		return;
+	}
+
+	if (tsa_mitigation == TSA_MITIGATION_AUTO) {
+		bool vm = false, uk = false;
+
+		tsa_mitigation = TSA_MITIGATION_NONE;
+
+		if (cpu_attack_vector_mitigated(CPU_MITIGATE_USER_KERNEL) ||
+		    cpu_attack_vector_mitigated(CPU_MITIGATE_USER_USER)) {
+			tsa_mitigation = TSA_MITIGATION_USER_KERNEL;
+			uk = true;
+		}
+
+		if (cpu_attack_vector_mitigated(CPU_MITIGATE_GUEST_HOST) ||
+		    cpu_attack_vector_mitigated(CPU_MITIGATE_GUEST_GUEST)) {
+			tsa_mitigation = TSA_MITIGATION_VM;
+			vm = true;
+		}
+
+		if (uk && vm)
+			tsa_mitigation = TSA_MITIGATION_FULL;
 	}
 
 	if (tsa_mitigation == TSA_MITIGATION_NONE)
 		return;
 
-	if (!boot_cpu_has(X86_FEATURE_VERW_CLEAR)) {
+	if (!boot_cpu_has(X86_FEATURE_VERW_CLEAR))
 		tsa_mitigation = TSA_MITIGATION_UCODE_NEEDED;
-		goto out;
-	}
-
-	if (tsa_mitigation == TSA_MITIGATION_AUTO)
-		tsa_mitigation = TSA_MITIGATION_FULL;
 
 	/*
 	 * No need to set verw_clear_cpu_buf_mitigation_selected - it
 	 * doesn't fit all cases here and it is not needed because this
 	 * is the only VERW-based mitigation on AMD.
 	 */
-out:
 	pr_info("%s\n", tsa_strings[tsa_mitigation]);
 }
···
 	char arg[20];
 	int ret, i;
 
-	if (cpu_mitigations_off() || !IS_ENABLED(CONFIG_MITIGATION_SPECTRE_V2))
+	if (!IS_ENABLED(CONFIG_MITIGATION_SPECTRE_V2))
 		return SPECTRE_V2_USER_CMD_NONE;
 
 	ret = cmdline_find_option(boot_command_line, "spectre_v2_user",
···
 		spectre_v2_user_stibp = SPECTRE_V2_USER_STRICT;
 		break;
 	case SPECTRE_V2_USER_CMD_AUTO:
+		if (!should_mitigate_vuln(X86_BUG_SPECTRE_V2_USER))
+			break;
+		spectre_v2_user_ibpb = SPECTRE_V2_USER_PRCTL;
+		if (smt_mitigations == SMT_MITIGATIONS_OFF)
+			break;
+		spectre_v2_user_stibp = SPECTRE_V2_USER_PRCTL;
+		break;
 	case SPECTRE_V2_USER_CMD_PRCTL:
 		spectre_v2_user_ibpb = SPECTRE_V2_USER_PRCTL;
 		spectre_v2_user_stibp = SPECTRE_V2_USER_PRCTL;
···
 	int ret, i;
 
 	cmd = IS_ENABLED(CONFIG_MITIGATION_SPECTRE_V2) ?
SPECTRE_V2_CMD_AUTO : SPECTRE_V2_CMD_NONE; 2071 - if (cmdline_find_option_bool(boot_command_line, "nospectre_v2") || 2072 - cpu_mitigations_off()) 1893 + if (cmdline_find_option_bool(boot_command_line, "nospectre_v2")) 2073 1894 return SPECTRE_V2_CMD_NONE; 2074 1895 2075 1896 ret = cmdline_find_option(boot_command_line, "spectre_v2", arg, sizeof(arg)); ··· 2271 2094 2272 2095 static void __init bhi_select_mitigation(void) 2273 2096 { 2274 - if (!boot_cpu_has(X86_BUG_BHI) || cpu_mitigations_off()) 2097 + if (!boot_cpu_has(X86_BUG_BHI)) 2275 2098 bhi_mitigation = BHI_MITIGATION_OFF; 2276 2099 2277 - if (bhi_mitigation == BHI_MITIGATION_AUTO) 2278 - bhi_mitigation = BHI_MITIGATION_ON; 2100 + if (bhi_mitigation != BHI_MITIGATION_AUTO) 2101 + return; 2102 + 2103 + if (cpu_attack_vector_mitigated(CPU_MITIGATE_GUEST_HOST)) { 2104 + if (cpu_attack_vector_mitigated(CPU_MITIGATE_USER_KERNEL)) 2105 + bhi_mitigation = BHI_MITIGATION_ON; 2106 + else 2107 + bhi_mitigation = BHI_MITIGATION_VMEXIT_ONLY; 2108 + } else { 2109 + bhi_mitigation = BHI_MITIGATION_OFF; 2110 + } 2279 2111 } 2280 2112 2281 2113 static void __init bhi_update_mitigation(void) ··· 2340 2154 case SPECTRE_V2_CMD_NONE: 2341 2155 return; 2342 2156 2343 - case SPECTRE_V2_CMD_FORCE: 2344 2157 case SPECTRE_V2_CMD_AUTO: 2158 + if (!should_mitigate_vuln(X86_BUG_SPECTRE_V2)) 2159 + break; 2160 + fallthrough; 2161 + case SPECTRE_V2_CMD_FORCE: 2345 2162 if (boot_cpu_has(X86_FEATURE_IBRS_ENHANCED)) { 2346 2163 spectre_v2_enabled = SPECTRE_V2_EIBRS; 2347 2164 break; ··· 2398 2209 } 2399 2210 } 2400 2211 2401 - if (boot_cpu_has_bug(X86_BUG_SPECTRE_V2) && !cpu_mitigations_off()) 2212 + if (boot_cpu_has_bug(X86_BUG_SPECTRE_V2)) 2402 2213 pr_info("%s\n", spectre_v2_strings[spectre_v2_enabled]); 2403 2214 } 2404 2215 ··· 3050 2861 3051 2862 static void __init l1tf_select_mitigation(void) 3052 2863 { 3053 - if (!boot_cpu_has_bug(X86_BUG_L1TF) || cpu_mitigations_off()) { 2864 + if (!boot_cpu_has_bug(X86_BUG_L1TF)) { 3054 2865 
l1tf_mitigation = L1TF_MITIGATION_OFF; 3055 2866 return; 3056 2867 } 3057 2868 3058 - if (l1tf_mitigation == L1TF_MITIGATION_AUTO) { 3059 - if (cpu_mitigations_auto_nosmt()) 3060 - l1tf_mitigation = L1TF_MITIGATION_FLUSH_NOSMT; 3061 - else 3062 - l1tf_mitigation = L1TF_MITIGATION_FLUSH; 2869 + if (l1tf_mitigation != L1TF_MITIGATION_AUTO) 2870 + return; 2871 + 2872 + if (!should_mitigate_vuln(X86_BUG_L1TF)) { 2873 + l1tf_mitigation = L1TF_MITIGATION_OFF; 2874 + return; 3063 2875 } 2876 + 2877 + if (smt_mitigations == SMT_MITIGATIONS_ON) 2878 + l1tf_mitigation = L1TF_MITIGATION_FLUSH_NOSMT; 2879 + else 2880 + l1tf_mitigation = L1TF_MITIGATION_FLUSH; 3064 2881 } 3065 2882 3066 2883 static void __init l1tf_apply_mitigation(void) ··· 3140 2945 #undef pr_fmt 3141 2946 #define pr_fmt(fmt) "Speculative Return Stack Overflow: " fmt 3142 2947 3143 - enum srso_mitigation { 3144 - SRSO_MITIGATION_NONE, 3145 - SRSO_MITIGATION_AUTO, 3146 - SRSO_MITIGATION_UCODE_NEEDED, 3147 - SRSO_MITIGATION_SAFE_RET_UCODE_NEEDED, 3148 - SRSO_MITIGATION_MICROCODE, 3149 - SRSO_MITIGATION_SAFE_RET, 3150 - SRSO_MITIGATION_IBPB, 3151 - SRSO_MITIGATION_IBPB_ON_VMEXIT, 3152 - SRSO_MITIGATION_BP_SPEC_REDUCE, 3153 - }; 3154 - 3155 2948 static const char * const srso_strings[] = { 3156 2949 [SRSO_MITIGATION_NONE] = "Vulnerable", 3157 2950 [SRSO_MITIGATION_UCODE_NEEDED] = "Vulnerable: No microcode", 3158 2951 [SRSO_MITIGATION_SAFE_RET_UCODE_NEEDED] = "Vulnerable: Safe RET, no microcode", 3159 2952 [SRSO_MITIGATION_MICROCODE] = "Vulnerable: Microcode, no safe RET", 2953 + [SRSO_MITIGATION_NOSMT] = "Mitigation: SMT disabled", 3160 2954 [SRSO_MITIGATION_SAFE_RET] = "Mitigation: Safe RET", 3161 2955 [SRSO_MITIGATION_IBPB] = "Mitigation: IBPB", 3162 2956 [SRSO_MITIGATION_IBPB_ON_VMEXIT] = "Mitigation: IBPB on VMEXIT only", 3163 2957 [SRSO_MITIGATION_BP_SPEC_REDUCE] = "Mitigation: Reduced Speculation" 3164 2958 }; 3165 - 3166 - static enum srso_mitigation srso_mitigation __ro_after_init = SRSO_MITIGATION_AUTO; 
3167 2959 3168 2960 static int __init srso_parse_cmdline(char *str) 3169 2961 { ··· 3178 2996 3179 2997 static void __init srso_select_mitigation(void) 3180 2998 { 3181 - bool has_microcode; 3182 - 3183 - if (!boot_cpu_has_bug(X86_BUG_SRSO) || cpu_mitigations_off()) 2999 + if (!boot_cpu_has_bug(X86_BUG_SRSO)) { 3184 3000 srso_mitigation = SRSO_MITIGATION_NONE; 3185 - 3186 - if (srso_mitigation == SRSO_MITIGATION_NONE) 3187 3001 return; 3002 + } 3188 3003 3189 - if (srso_mitigation == SRSO_MITIGATION_AUTO) 3190 - srso_mitigation = SRSO_MITIGATION_SAFE_RET; 3191 - 3192 - has_microcode = boot_cpu_has(X86_FEATURE_IBPB_BRTYPE); 3193 - if (has_microcode) { 3194 - /* 3195 - * Zen1/2 with SMT off aren't vulnerable after the right 3196 - * IBPB microcode has been applied. 3197 - */ 3198 - if (boot_cpu_data.x86 < 0x19 && !cpu_smt_possible()) { 3199 - setup_force_cpu_cap(X86_FEATURE_SRSO_NO); 3004 + if (srso_mitigation == SRSO_MITIGATION_AUTO) { 3005 + if (should_mitigate_vuln(X86_BUG_SRSO)) { 3006 + srso_mitigation = SRSO_MITIGATION_SAFE_RET; 3007 + } else { 3200 3008 srso_mitigation = SRSO_MITIGATION_NONE; 3201 3009 return; 3202 3010 } 3203 - } else { 3011 + } 3012 + 3013 + /* Zen1/2 with SMT off aren't vulnerable to SRSO. */ 3014 + if (boot_cpu_data.x86 < 0x19 && !cpu_smt_possible()) { 3015 + srso_mitigation = SRSO_MITIGATION_NOSMT; 3016 + return; 3017 + } 3018 + 3019 + if (!boot_cpu_has(X86_FEATURE_IBPB_BRTYPE)) { 3204 3020 pr_warn("IBPB-extending microcode not applied!\n"); 3205 3021 pr_warn(SRSO_NOTICE); 3022 + 3023 + /* 3024 + * Safe-RET provides partial mitigation without microcode, but 3025 + * other mitigations require microcode to provide any 3026 + * mitigations. 
3027 + */ 3028 + if (srso_mitigation == SRSO_MITIGATION_SAFE_RET) 3029 + srso_mitigation = SRSO_MITIGATION_SAFE_RET_UCODE_NEEDED; 3030 + else 3031 + srso_mitigation = SRSO_MITIGATION_UCODE_NEEDED; 3206 3032 } 3207 3033 3208 3034 switch (srso_mitigation) { 3209 3035 case SRSO_MITIGATION_SAFE_RET: 3036 + case SRSO_MITIGATION_SAFE_RET_UCODE_NEEDED: 3210 3037 if (boot_cpu_has(X86_FEATURE_SRSO_USER_KERNEL_NO)) { 3211 3038 srso_mitigation = SRSO_MITIGATION_IBPB_ON_VMEXIT; 3212 3039 goto ibpb_on_vmexit; ··· 3225 3034 pr_err("WARNING: kernel not compiled with MITIGATION_SRSO.\n"); 3226 3035 srso_mitigation = SRSO_MITIGATION_NONE; 3227 3036 } 3228 - 3229 - if (!has_microcode) 3230 - srso_mitigation = SRSO_MITIGATION_SAFE_RET_UCODE_NEEDED; 3231 3037 break; 3232 3038 ibpb_on_vmexit: 3233 3039 case SRSO_MITIGATION_IBPB_ON_VMEXIT: ··· 3239 3051 pr_err("WARNING: kernel not compiled with MITIGATION_IBPB_ENTRY.\n"); 3240 3052 srso_mitigation = SRSO_MITIGATION_NONE; 3241 3053 } 3242 - 3243 - if (!has_microcode) 3244 - srso_mitigation = SRSO_MITIGATION_UCODE_NEEDED; 3245 3054 break; 3246 3055 default: 3247 3056 break; ··· 3253 3068 srso_mitigation = SRSO_MITIGATION_IBPB; 3254 3069 3255 3070 if (boot_cpu_has_bug(X86_BUG_SRSO) && 3256 - !cpu_mitigations_off() && 3257 - !boot_cpu_has(X86_FEATURE_SRSO_NO)) 3071 + !cpu_mitigations_off()) 3258 3072 pr_info("%s\n", srso_strings[srso_mitigation]); 3259 3073 } 3260 3074 ··· 3549 3365 3550 3366 static ssize_t srso_show_state(char *buf) 3551 3367 { 3552 - if (boot_cpu_has(X86_FEATURE_SRSO_NO)) 3553 - return sysfs_emit(buf, "Mitigation: SMT disabled\n"); 3554 - 3555 3368 return sysfs_emit(buf, "%s\n", srso_strings[srso_mitigation]); 3556 3369 } 3557 3370
+3 -1
arch/x86/mm/pti.c
···
 #include <asm/desc.h>
 #include <asm/sections.h>
 #include <asm/set_memory.h>
+#include <asm/bugs.h>
 
 #undef pr_fmt
 #define pr_fmt(fmt)     "Kernel/User page tables isolation: " fmt
···
 		return;
 	}
 
-	if (cpu_mitigations_off())
+	if (pti_mode == PTI_AUTO &&
+	    !cpu_attack_vector_mitigated(CPU_MITIGATE_USER_KERNEL))
 		pti_mode = PTI_FORCE_OFF;
 	if (pti_mode == PTI_FORCE_OFF) {
 		pti_print_if_insecure("disabled on command line.");
+21
include/linux/cpu.h
···
 static inline void cpuhp_report_idle_dead(void) { }
 #endif /* #ifdef CONFIG_HOTPLUG_CPU */
 
+enum cpu_attack_vectors {
+	CPU_MITIGATE_USER_KERNEL,
+	CPU_MITIGATE_USER_USER,
+	CPU_MITIGATE_GUEST_HOST,
+	CPU_MITIGATE_GUEST_GUEST,
+	NR_CPU_ATTACK_VECTORS,
+};
+
+enum smt_mitigations {
+	SMT_MITIGATIONS_OFF,
+	SMT_MITIGATIONS_AUTO,
+	SMT_MITIGATIONS_ON,
+};
+
 #ifdef CONFIG_CPU_MITIGATIONS
 extern bool cpu_mitigations_off(void);
 extern bool cpu_mitigations_auto_nosmt(void);
+extern bool cpu_attack_vector_mitigated(enum cpu_attack_vectors v);
+extern enum smt_mitigations smt_mitigations;
 #else
 static inline bool cpu_mitigations_off(void)
 {
···
 {
 	return false;
 }
+static inline bool cpu_attack_vector_mitigated(enum cpu_attack_vectors v)
+{
+	return false;
+}
+#define smt_mitigations SMT_MITIGATIONS_OFF
 #endif
 
 #endif /* _LINUX_CPU_H_ */
+119 -11
kernel/cpu.c
···
 #include <linux/cpuset.h>
 #include <linux/random.h>
 #include <linux/cc_platform.h>
+#include <linux/parser.h>
 
 #include <trace/events/power.h>
 #define CREATE_TRACE_POINTS
···
 #ifdef CONFIG_CPU_MITIGATIONS
 /*
- * These are used for a global "mitigations=" cmdline option for toggling
- * optional CPU mitigations.
+ * All except the cross-thread attack vector are mitigated by default.
+ * Cross-thread mitigation often requires disabling SMT which is expensive
+ * so cross-thread mitigations are only partially enabled by default.
+ *
+ * Guest-to-Host and Guest-to-Guest vectors are only needed if KVM support is
+ * present.
+ */
+static bool attack_vectors[NR_CPU_ATTACK_VECTORS] __ro_after_init = {
+	[CPU_MITIGATE_USER_KERNEL] = true,
+	[CPU_MITIGATE_USER_USER] = true,
+	[CPU_MITIGATE_GUEST_HOST] = IS_ENABLED(CONFIG_KVM),
+	[CPU_MITIGATE_GUEST_GUEST] = IS_ENABLED(CONFIG_KVM),
+};
+
+bool cpu_attack_vector_mitigated(enum cpu_attack_vectors v)
+{
+	if (v < NR_CPU_ATTACK_VECTORS)
+		return attack_vectors[v];
+
+	WARN_ONCE(1, "Invalid attack vector %d\n", v);
+	return false;
+}
+
+/*
+ * There are 3 global options, 'off', 'auto', 'auto,nosmt'. These may optionally
+ * be combined with attack-vector disables which follow them.
+ *
+ * Examples:
+ *   mitigations=auto,no_user_kernel,no_user_user,no_cross_thread
+ *   mitigations=auto,nosmt,no_guest_host,no_guest_guest
+ *
+ * mitigations=off is equivalent to disabling all attack vectors.
  */
 enum cpu_mitigations {
 	CPU_MITIGATIONS_OFF,
···
 	CPU_MITIGATIONS_AUTO_NOSMT,
 };
 
+enum {
+	NO_USER_KERNEL,
+	NO_USER_USER,
+	NO_GUEST_HOST,
+	NO_GUEST_GUEST,
+	NO_CROSS_THREAD,
+	NR_VECTOR_PARAMS,
+};
+
+enum smt_mitigations smt_mitigations __ro_after_init = SMT_MITIGATIONS_AUTO;
 static enum cpu_mitigations cpu_mitigations __ro_after_init = CPU_MITIGATIONS_AUTO;
+
+static const match_table_t global_mitigations = {
+	{ CPU_MITIGATIONS_AUTO_NOSMT,	"auto,nosmt"},
+	{ CPU_MITIGATIONS_AUTO,		"auto"},
+	{ CPU_MITIGATIONS_OFF,		"off"},
+};
+
+static const match_table_t vector_mitigations = {
+	{ NO_USER_KERNEL,	"no_user_kernel"},
+	{ NO_USER_USER,		"no_user_user"},
+	{ NO_GUEST_HOST,	"no_guest_host"},
+	{ NO_GUEST_GUEST,	"no_guest_guest"},
+	{ NO_CROSS_THREAD,	"no_cross_thread"},
+	{ NR_VECTOR_PARAMS,	NULL},
+};
+
+static int __init mitigations_parse_global_opt(char *arg)
+{
+	int i;
+
+	for (i = 0; i < ARRAY_SIZE(global_mitigations); i++) {
+		const char *pattern = global_mitigations[i].pattern;
+
+		if (!strncmp(arg, pattern, strlen(pattern))) {
+			cpu_mitigations = global_mitigations[i].token;
+			return strlen(pattern);
+		}
+	}
+
+	return 0;
+}
 
 static int __init mitigations_parse_cmdline(char *arg)
 {
-	if (!strcmp(arg, "off"))
-		cpu_mitigations = CPU_MITIGATIONS_OFF;
-	else if (!strcmp(arg, "auto"))
-		cpu_mitigations = CPU_MITIGATIONS_AUTO;
-	else if (!strcmp(arg, "auto,nosmt"))
-		cpu_mitigations = CPU_MITIGATIONS_AUTO_NOSMT;
-	else
-		pr_crit("Unsupported mitigations=%s, system may still be vulnerable\n",
-			arg);
+	char *s, *p;
+	int len;
+
+	len = mitigations_parse_global_opt(arg);
+
+	if (cpu_mitigations_off()) {
+		memset(attack_vectors, 0, sizeof(attack_vectors));
+		smt_mitigations = SMT_MITIGATIONS_OFF;
+	} else if (cpu_mitigations_auto_nosmt()) {
+		smt_mitigations = SMT_MITIGATIONS_ON;
+	}
+
+	p = arg + len;
+
+	if (!*p)
+		return 0;
+
+	/* Attack vector controls may come after the ',' */
+	if (*p++ != ',' || !IS_ENABLED(CONFIG_ARCH_HAS_CPU_ATTACK_VECTORS)) {
+		pr_crit("Unsupported mitigations=%s, system may still be vulnerable\n", arg);
+		return 0;
+	}
+
+	while ((s = strsep(&p, ",")) != NULL) {
+		switch (match_token(s, vector_mitigations, NULL)) {
+		case NO_USER_KERNEL:
+			attack_vectors[CPU_MITIGATE_USER_KERNEL] = false;
+			break;
+		case NO_USER_USER:
+			attack_vectors[CPU_MITIGATE_USER_USER] = false;
+			break;
+		case NO_GUEST_HOST:
+			attack_vectors[CPU_MITIGATE_GUEST_HOST] = false;
+			break;
+		case NO_GUEST_GUEST:
+			attack_vectors[CPU_MITIGATE_GUEST_GUEST] = false;
+			break;
+		case NO_CROSS_THREAD:
+			smt_mitigations = SMT_MITIGATIONS_OFF;
+			break;
+		default:
+			pr_crit("Unsupported mitigations options %s\n", s);
+			return 0;
+		}
+	}
 
 	return 0;
 }