Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

x86/mm: Make DMA memory shared for TD guest

Intel TDX doesn't allow the VMM to directly access guest private memory.
Any memory that is required for communication with the VMM must be
shared explicitly. The same rule applies to any DMA to and from a
TDX guest: all DMA pages have to be marked as shared pages. A generic way
to achieve this without any changes to device drivers is to use the
SWIOTLB framework.

The previous patch ("Add support for TDX shared memory") gave TDX guests
the _ability_ to make some pages shared, but did not make any pages
shared. This actually marks SWIOTLB buffers *as* shared.

Start returning true for cc_platform_has(CC_ATTR_GUEST_MEM_ENCRYPT) in
TDX guests. This has several implications:

- Allows the existing mem_encrypt_init() to be used for TDX, which
  sets SWIOTLB buffers shared (aka. "decrypted").
- Ensures that all DMA is routed via the SWIOTLB mechanism (see
  pci_swiotlb_detect()).

Stop selecting DYNAMIC_PHYSICAL_MASK directly. It will get set
indirectly by selecting X86_MEM_ENCRYPT.
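For context, this works because X86_MEM_ENCRYPT itself selects DYNAMIC_PHYSICAL_MASK, roughly as in this Kconfig fragment (abbreviated from arch/x86/Kconfig of this era; check the tree for the exact select list):

```
config X86_MEM_ENCRYPT
	select ARCH_HAS_FORCE_DMA_UNENCRYPTED
	select DYNAMIC_PHYSICAL_MASK
	def_bool n
```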

mem_encrypt_init() is currently under an AMD-specific #ifdef. Move it to
a generic area of the header.

Co-developed-by: Kuppuswamy Sathyanarayanan <sathyanarayanan.kuppuswamy@linux.intel.com>
Signed-off-by: Kuppuswamy Sathyanarayanan <sathyanarayanan.kuppuswamy@linux.intel.com>
Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
Reviewed-by: Andi Kleen <ak@linux.intel.com>
Reviewed-by: Tony Luck <tony.luck@intel.com>
Reviewed-by: Dave Hansen <dave.hansen@linux.intel.com>
Link: https://lkml.kernel.org/r/20220405232939.73860-28-kirill.shutemov@linux.intel.com

Authored by Kirill A. Shutemov, committed by Dave Hansen
968b4931 7dbde763

+13 -5
+1 -1
arch/x86/Kconfig
···
 	depends on X86_64 && CPU_SUP_INTEL
 	depends on X86_X2APIC
 	select ARCH_HAS_CC_PLATFORM
-	select DYNAMIC_PHYSICAL_MASK
+	select X86_MEM_ENCRYPT
 	select X86_MCE
 	help
 	  Support running as a guest under Intel TDX. Without this support,
+1
arch/x86/coco/core.c
···
 	case CC_ATTR_GUEST_UNROLL_STRING_IO:
 	case CC_ATTR_HOTPLUG_DISABLED:
 	case CC_ATTR_GUEST_MEM_ENCRYPT:
+	case CC_ATTR_MEM_ENCRYPT:
 		return true;
 	default:
 		return false;
+3 -3
arch/x86/include/asm/mem_encrypt.h
···
 
 void __init mem_encrypt_free_decrypted_mem(void);
 
-/* Architecture __weak replacement functions */
-void __init mem_encrypt_init(void);
-
 void __init sev_es_init_vc_handling(void);
 
 #define __bss_decrypted __section(".bss..decrypted")
···
 #define __bss_decrypted
 
 #endif /* CONFIG_AMD_MEM_ENCRYPT */
+
+/* Architecture __weak replacement functions */
+void __init mem_encrypt_init(void);
 
 /*
  * The __sme_pa() and __sme_pa_nodebug() macros are meant for use when
+8 -1
arch/x86/mm/mem_encrypt.c
···
 
 static void print_mem_encrypt_feature_info(void)
 {
-	pr_info("AMD Memory Encryption Features active:");
+	pr_info("Memory Encryption Features active:");
+
+	if (cpu_feature_enabled(X86_FEATURE_TDX_GUEST)) {
+		pr_cont(" Intel TDX\n");
+		return;
+	}
+
+	pr_cont("AMD ");
 
 	/* Secure Memory Encryption */
 	if (cc_platform_has(CC_ATTR_HOST_MEM_ENCRYPT)) {