Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git


x86/sev: Add SEV-SNP guest feature negotiation support

The hypervisor can enable various new features (SEV_FEATURES[1:63]) and start an
SNP guest. Some of these features need a guest-side implementation. If any such
feature is enabled without one, the behavior of the SNP guest is undefined: it
may fail to boot in a non-obvious way, making the failure difficult to debug.

Instead of allowing the guest to continue and have it fail randomly later,
detect this early and fail gracefully.

The SEV_STATUS MSR indicates which features the hypervisor has enabled. While
booting, SNP guests should ascertain that all the enabled features have a guest-side
implementation. If a feature is not implemented in the guest, the guest
terminates booting with a GHCB-protocol Non-Automatic Exit (NAE) termination
request event; see the "SEV-ES Guest-Hypervisor Communication Block Standardization"
document (currently at https://developer.amd.com/wp-content/resources/56421.pdf),
section "Termination Request".

Populate SW_EXITINFO2 with the mask of unsupported features so that the
hypervisor can easily report them to the user.
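The mask computation described above can be sketched in plain userspace C. This is a hedged illustration only: `snp_unsupported_features`, `IMPL_REQ_MASK`, and `PRESENT_MASK` are hypothetical stand-ins for the boot-code symbols, and the bit values are illustrative.

```c
#include <stdint.h>

/* Illustrative stand-ins: the real masks live in the compressed boot code,
 * and SNP feature bits start at bit 3 of the SEV_STATUS MSR. */
#define IMPL_REQ_MASK  ((uint64_t)0xF8) /* features needing guest support */
#define PRESENT_MASK   ((uint64_t)0x00) /* none implemented by this guest */

/* HV-enabled features the guest cannot handle; this is the value that
 * would be passed back via SW_EXITINFO2 for the hypervisor to report. */
uint64_t snp_unsupported_features(uint64_t sev_status)
{
	return sev_status & IMPL_REQ_MASK & ~PRESENT_MASK;
}
```

With an empty "present" mask, any required feature bit set in SEV_STATUS appears in the reported value; bits outside the required mask (e.g. the SEV/SNP enable bits) are filtered out.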

More details are in the AMD64 APM Vol 2, Section "SEV_STATUS MSR".

[ bp:
- Massage.
- Move snp_check_features() call to C code.
Note: the CC:stable@ aspect here is to be able to protect older, stable
kernels when running on newer hypervisors. Or not "running" but fail
reliably and in a well-defined manner instead of randomly. ]

Fixes: cbd3d4f7c4e5 ("x86/sev: Check SEV-SNP features support")
Signed-off-by: Nikunj A Dadhania <nikunj@amd.com>
Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
Reviewed-by: Tom Lendacky <thomas.lendacky@amd.com>
Cc: <stable@kernel.org>
Link: https://lore.kernel.org/r/20230118061943.534309-1-nikunj@amd.com

Authored by Nikunj A Dadhania, committed by Borislav Petkov (AMD)
8c29f016 6c796996

6 files changed, 140 insertions(+)

Documentation/x86/amd-memory-encryption.rst (+36)

···
 not enable SME, then Linux will not be able to activate memory encryption, even
 if configured to do so by default or the mem_encrypt=on command line parameter
 is specified.
+
+Secure Nested Paging (SNP)
+==========================
+
+SEV-SNP introduces new features (SEV_FEATURES[1:63]) which can be enabled
+by the hypervisor for security enhancements. Some of these features need
+guest side implementation to function correctly. The below table lists the
+expected guest behavior with various possible scenarios of guest/hypervisor
+SNP feature support.
+
++-----------------+---------------+---------------+------------------+
+| Feature Enabled | Guest needs   | Guest has     | Guest boot       |
+| by the HV       | implementation| implementation| behaviour        |
++=================+===============+===============+==================+
+| No              | No            | No            | Boot             |
+|                 |               |               |                  |
++-----------------+---------------+---------------+------------------+
+| No              | Yes           | No            | Boot             |
+|                 |               |               |                  |
++-----------------+---------------+---------------+------------------+
+| No              | Yes           | Yes           | Boot             |
+|                 |               |               |                  |
++-----------------+---------------+---------------+------------------+
+| Yes             | No            | No            | Boot with        |
+|                 |               |               | feature enabled  |
++-----------------+---------------+---------------+------------------+
+| Yes             | Yes           | No            | Graceful boot    |
+|                 |               |               | failure          |
++-----------------+---------------+---------------+------------------+
+| Yes             | Yes           | Yes           | Boot with        |
+|                 |               |               | feature enabled  |
++-----------------+---------------+---------------+------------------+
+
+More details in AMD64 APM[1] Vol 2: 15.34.10 SEV_STATUS MSR
+
+[1] https://www.amd.com/system/files/TechDocs/40332.pdf
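The boot-behaviour table above reduces to a single predicate: the boot fails (gracefully) exactly when a feature is HV-enabled, needs a guest-side implementation, and the guest lacks one. A minimal illustrative model of that decision (hypothetical helper, not kernel code):

```c
#include <stdbool.h>

/* Per the documentation table: only the HV-enabled + needs-implementation +
 * not-implemented combination terminates the boot; every other row boots. */
bool snp_boot_should_fail(bool hv_enabled, bool needs_impl, bool has_impl)
{
	return hv_enabled && needs_impl && !has_impl;
}
```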
arch/x86/boot/compressed/ident_map_64.c (+6)

···
 
 	/* Load the new page-table. */
 	write_cr3(top_level_pgt);
+
+	/*
+	 * Now that the required page table mappings are established and a
+	 * GHCB can be used, check for SNP guest/HV feature compatibility.
+	 */
+	snp_check_features();
 }
 
 static pte_t *split_large_pmd(struct x86_mapping_info *info,
arch/x86/boot/compressed/misc.h (+2)

···
 
 #ifdef CONFIG_AMD_MEM_ENCRYPT
 void sev_enable(struct boot_params *bp);
+void snp_check_features(void);
 void sev_es_shutdown_ghcb(void);
 extern bool sev_es_check_ghcb_fault(unsigned long address);
 void snp_set_page_private(unsigned long paddr);
···
 	if (bp)
 		bp->cc_blob_address = 0;
 }
+static inline void snp_check_features(void) { }
 static inline void sev_es_shutdown_ghcb(void) { }
 static inline bool sev_es_check_ghcb_fault(unsigned long address)
 {
arch/x86/boot/compressed/sev.c (+70)

···
 		error("Can't unmap GHCB page");
 }
 
+static void __noreturn sev_es_ghcb_terminate(struct ghcb *ghcb, unsigned int set,
+					     unsigned int reason, u64 exit_info_2)
+{
+	u64 exit_info_1 = SVM_VMGEXIT_TERM_REASON(set, reason);
+
+	vc_ghcb_invalidate(ghcb);
+	ghcb_set_sw_exit_code(ghcb, SVM_VMGEXIT_TERM_REQUEST);
+	ghcb_set_sw_exit_info_1(ghcb, exit_info_1);
+	ghcb_set_sw_exit_info_2(ghcb, exit_info_2);
+
+	sev_es_wr_ghcb_msr(__pa(ghcb));
+	VMGEXIT();
+
+	while (true)
+		asm volatile("hlt\n" : : : "memory");
+}
+
 bool sev_es_check_ghcb_fault(unsigned long address)
 {
 	/* Check whether the fault was on the GHCB page */
···
 	attrs = 1;
 	if (rmpadjust((unsigned long)&boot_ghcb_page, RMP_PG_SIZE_4K, attrs))
 		sev_es_terminate(SEV_TERM_SET_LINUX, GHCB_TERM_NOT_VMPL0);
+}
+
+/*
+ * SNP_FEATURES_IMPL_REQ is the mask of SNP features that will need
+ * guest side implementation for proper functioning of the guest. If any
+ * of these features are enabled in the hypervisor but are lacking guest
+ * side implementation, the behavior of the guest will be undefined. The
+ * guest could fail in non-obvious way making it difficult to debug.
+ *
+ * As the behavior of reserved feature bits is unknown to be on the
+ * safe side add them to the required features mask.
+ */
+#define SNP_FEATURES_IMPL_REQ	(MSR_AMD64_SNP_VTOM |			\
+				 MSR_AMD64_SNP_REFLECT_VC |		\
+				 MSR_AMD64_SNP_RESTRICTED_INJ |		\
+				 MSR_AMD64_SNP_ALT_INJ |		\
+				 MSR_AMD64_SNP_DEBUG_SWAP |		\
+				 MSR_AMD64_SNP_VMPL_SSS |		\
+				 MSR_AMD64_SNP_SECURE_TSC |		\
+				 MSR_AMD64_SNP_VMGEXIT_PARAM |		\
+				 MSR_AMD64_SNP_VMSA_REG_PROTECTION |	\
+				 MSR_AMD64_SNP_RESERVED_BIT13 |		\
+				 MSR_AMD64_SNP_RESERVED_BIT15 |		\
+				 MSR_AMD64_SNP_RESERVED_MASK)
+
+/*
+ * SNP_FEATURES_PRESENT is the mask of SNP features that are implemented
+ * by the guest kernel. As and when a new feature is implemented in the
+ * guest kernel, a corresponding bit should be added to the mask.
+ */
+#define SNP_FEATURES_PRESENT (0)
+
+void snp_check_features(void)
+{
+	u64 unsupported;
+
+	if (!(sev_status & MSR_AMD64_SEV_SNP_ENABLED))
+		return;
+
+	/*
+	 * Terminate the boot if hypervisor has enabled any feature lacking
+	 * guest side implementation. Pass on the unsupported features mask through
+	 * EXIT_INFO_2 of the GHCB protocol so that those features can be reported
+	 * as part of the guest boot failure.
+	 */
+	unsupported = sev_status & SNP_FEATURES_IMPL_REQ & ~SNP_FEATURES_PRESENT;
+	if (unsupported) {
+		if (ghcb_version < 2 || (!boot_ghcb && !early_setup_ghcb()))
+			sev_es_terminate(SEV_TERM_SET_GEN, GHCB_SNP_UNSUPPORTED);
+
+		sev_es_ghcb_terminate(boot_ghcb, SEV_TERM_SET_GEN,
+				      GHCB_SNP_UNSUPPORTED, unsupported);
+	}
 }
 
 void sev_enable(struct boot_params *bp)
arch/x86/include/asm/msr-index.h (+20)

···
 #define MSR_AMD64_SEV_ES_ENABLED	BIT_ULL(MSR_AMD64_SEV_ES_ENABLED_BIT)
 #define MSR_AMD64_SEV_SNP_ENABLED	BIT_ULL(MSR_AMD64_SEV_SNP_ENABLED_BIT)
 
+/* SNP feature bits enabled by the hypervisor */
+#define MSR_AMD64_SNP_VTOM			BIT_ULL(3)
+#define MSR_AMD64_SNP_REFLECT_VC		BIT_ULL(4)
+#define MSR_AMD64_SNP_RESTRICTED_INJ		BIT_ULL(5)
+#define MSR_AMD64_SNP_ALT_INJ			BIT_ULL(6)
+#define MSR_AMD64_SNP_DEBUG_SWAP		BIT_ULL(7)
+#define MSR_AMD64_SNP_PREVENT_HOST_IBS		BIT_ULL(8)
+#define MSR_AMD64_SNP_BTB_ISOLATION		BIT_ULL(9)
+#define MSR_AMD64_SNP_VMPL_SSS			BIT_ULL(10)
+#define MSR_AMD64_SNP_SECURE_TSC		BIT_ULL(11)
+#define MSR_AMD64_SNP_VMGEXIT_PARAM		BIT_ULL(12)
+#define MSR_AMD64_SNP_IBS_VIRT			BIT_ULL(14)
+#define MSR_AMD64_SNP_VMSA_REG_PROTECTION	BIT_ULL(16)
+#define MSR_AMD64_SNP_SMT_PROTECTION		BIT_ULL(17)
+
+/* SNP feature bits reserved for future use. */
+#define MSR_AMD64_SNP_RESERVED_BIT13		BIT_ULL(13)
+#define MSR_AMD64_SNP_RESERVED_BIT15		BIT_ULL(15)
+#define MSR_AMD64_SNP_RESERVED_MASK		GENMASK_ULL(63, 18)
+
 #define MSR_AMD64_VIRT_SPEC_CTRL	0xc001011f
 
 /* AMD Collaborative Processor Performance Control MSRs */
arch/x86/include/uapi/asm/svm.h (+6)

···
 #define SVM_VMGEXIT_AP_CREATE			1
 #define SVM_VMGEXIT_AP_DESTROY			2
 #define SVM_VMGEXIT_HV_FEATURES		0x8000fffd
+#define SVM_VMGEXIT_TERM_REQUEST		0x8000fffe
+#define SVM_VMGEXIT_TERM_REASON(reason_set, reason_code)	\
+	/* SW_EXITINFO1[3:0] */					\
+	(((((u64)reason_set) & 0xf)) |				\
+	/* SW_EXITINFO1[11:4] */				\
+	((((u64)reason_code) & 0xff) << 4))
 #define SVM_VMGEXIT_UNSUPPORTED_EVENT		0x8000ffff
 
 /* Exit code reserved for hypervisor/software use */
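The SW_EXITINFO1 packing defined by SVM_VMGEXIT_TERM_REASON above can be restated and checked in plain userspace C; `term_reason` is a sketch of the same expression, not the uapi macro itself:

```c
#include <stdint.h>

/* reason_set goes in SW_EXITINFO1[3:0], reason_code in SW_EXITINFO1[11:4] */
uint64_t term_reason(uint64_t reason_set, uint64_t reason_code)
{
	return (reason_set & 0xf) | ((reason_code & 0xff) << 4);
}
```

For example, reason set 0 with reason code 2 packs to 0x20, and any reason_set value is truncated to its low 4 bits, matching the macro's masking.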