Merge tag 'x86-urgent-2020-03-15' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip

Pull x86 fixes from Thomas Gleixner:
"Two fixes for x86:

- Map EFI runtime service data as encrypted when SEV is enabled.

Otherwise e.g. SMBIOS data cannot be properly decoded by dmidecode.

- Remove the warning in the vector management code which triggered
when a managed interrupt affinity changed outside of a CPU hotplug
operation.

The warning was correct until the recent core code change that
introduced a CPU isolation feature which needs to migrate managed
interrupts away from online CPUs under certain conditions to
achieve the isolation"
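For context, the isolation feature in question is selected via the 'managed_irq' flag of the isolcpus= boot parameter. A command line isolating CPUs 2-7, including from managed interrupts, could look like the following (the CPU list here is an arbitrary example, not taken from the patches):

```
isolcpus=managed_irq,domain,2-7
```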

* tag 'x86-urgent-2020-03-15' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
x86/vector: Remove warning on managed interrupt migration
x86/ioremap: Map EFI runtime services data as encrypted for SEV

Changed files: +26 -6

arch/x86/kernel/apic/vector.c: +8 -6
···
 	bool managed = apicd->is_managed;

 	/*
-	 * This should never happen. Managed interrupts are not
-	 * migrated except on CPU down, which does not involve the
-	 * cleanup vector. But try to keep the accounting correct
-	 * nevertheless.
+	 * Managed interrupts are usually not migrated away
+	 * from an online CPU, but CPU isolation 'managed_irq'
+	 * can make that happen.
+	 * 1) Activation does not take the isolation into account
+	 *    to keep the code simple
+	 * 2) Migration away from an isolated CPU can happen when
+	 *    a non-isolated CPU which is in the calculated
+	 *    affinity mask comes online.
 	 */
-	WARN_ON_ONCE(managed);
-
 	trace_vector_free_moved(apicd->irq, cpu, vector, managed);
 	irq_matrix_free(vector_matrix, cpu, vector, managed);
 	per_cpu(vector_irq, cpu)[vector] = VECTOR_UNUSED;
arch/x86/mm/ioremap.c: +18
···
 		return 0;
 	}

+/*
+ * The EFI runtime services data area is not covered by walk_mem_res(), but must
+ * be mapped encrypted when SEV is active.
+ */
+static void __ioremap_check_other(resource_size_t addr, struct ioremap_desc *desc)
+{
+	if (!sev_active())
+		return;
+
+	if (efi_mem_type(addr) == EFI_RUNTIME_SERVICES_DATA)
+		desc->flags |= IORES_MAP_ENCRYPTED;
+}
+
 static int __ioremap_collect_map_flags(struct resource *res, void *arg)
 {
 	struct ioremap_desc *desc = arg;
···
  * To avoid multiple resource walks, this function walks resources marked as
  * IORESOURCE_MEM and IORESOURCE_BUSY and looking for system RAM and/or a
  * resource described not as IORES_DESC_NONE (e.g. IORES_DESC_ACPI_TABLES).
+ *
+ * After that, deal with misc other ranges in __ioremap_check_other() which do
+ * not fall into the above category.
  */
 static void __ioremap_check_mem(resource_size_t addr, unsigned long size,
 				struct ioremap_desc *desc)
···
 	memset(desc, 0, sizeof(struct ioremap_desc));

 	walk_mem_res(start, end, desc, __ioremap_collect_map_flags);
+
+	__ioremap_check_other(addr, desc);
 }

 /*