arm64: efi: Fix handling of misaligned runtime regions and drop warning

Currently, when mapping the EFI runtime regions in the EFI page tables,
we complain about misaligned regions in a rather noisy way, using
WARN().

Not only does this produce a lot of irrelevant clutter in the log, it is
also factually incorrect, as misaligned runtime regions are actually
allowed by the EFI spec as long as they don't require conflicting memory
types within the same 64k page.
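
As a purely illustrative sketch (the addresses and sizes below are made up,
with os_page standing in for the kernel's 64k PAGE_SIZE), a region that
satisfies the EFI spec's 4k granularity can still start in the middle of a
64k page frame:

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
	const uint64_t efi_page = 0x1000;	/* 4k: EFI's fixed granule */
	const uint64_t os_page  = 0x10000;	/* 64k, e.g. CONFIG_ARM64_64K_PAGES */
	/* hypothetical runtime region: 32 EFI pages starting at 0x80025000 */
	const uint64_t phys = 0x80025000, size = 32 * efi_page;

	/* prints 1: both the start and the size are multiples of 4k */
	printf("aligned to EFI pages: %d\n",
	       !(phys % efi_page) && !(size % efi_page));
	/* prints 0: the start address 0x...5000 is not a multiple of 64k */
	printf("aligned to OS pages:  %d\n",
	       !(phys % os_page) && !(size % os_page));
	printf("64k frames touched: 0x%" PRIx64 "..0x%" PRIx64 "\n",
	       phys & ~(os_page - 1), (phys + size - 1) & ~(os_page - 1));
	return 0;
}

The first and last 64k frames covered by such a region are only partially
owned by it, so they may be shared with whatever the firmware placed next
to it, which is why strict per-region permissions cannot be applied to
them.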

So let's drop the warning, and tweak the code so that we
- take both the start and end of the region into account when checking
for misalignment
- only revert to RWX mappings for non-code regions if misaligned code
regions are also known to exist.
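
For reference, the sketch below models the resulting behaviour as a
standalone userspace program, assuming 64k kernel pages; OS_PAGE_SIZE, the
rt_* names and the three-valued prot enum are illustrative stand-ins, not
the kernel's own definitions (the real change is in arch/arm64/kernel/efi.c
below):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define EFI_PAGE_SHIFT	12
#define EFI_PAGE_SIZE	(1ULL << EFI_PAGE_SHIFT)
#define OS_PAGE_SIZE	0x10000ULL		/* assume 64k kernel pages */

enum rt_type { RT_SERVICES_CODE, RT_SERVICES_DATA };
enum rt_prot { PROT_STRICT, PROT_RW_NX, PROT_RWX };

struct rt_region {
	uint64_t phys_addr;
	uint64_t num_pages;	/* counted in 4k EFI pages */
	enum rt_type type;
};

/* Both the start and the end (start + size) must be OS-page aligned. */
static bool region_is_misaligned(const struct rt_region *md)
{
	if (OS_PAGE_SIZE == EFI_PAGE_SIZE)
		return false;
	return (md->phys_addr & (OS_PAGE_SIZE - 1)) ||
	       ((md->num_pages << EFI_PAGE_SHIFT) & (OS_PAGE_SIZE - 1));
}

/* Misaligned regions get RW/NX unless a misaligned code region has been
 * seen, in which case RWX is the only attribute that suits both. */
static enum rt_prot pick_prot(const struct rt_region *md)
{
	static bool code_is_misaligned;

	if (region_is_misaligned(md)) {
		code_is_misaligned |= (md->type == RT_SERVICES_CODE);
		return code_is_misaligned ? PROT_RWX : PROT_RW_NX;
	}
	return PROT_STRICT;	/* normal RO/XP handling, elided here */
}

int main(void)
{
	struct rt_region data = { 0x80025000, 32, RT_SERVICES_DATA };
	struct rt_region code = { 0x80115000, 64, RT_SERVICES_CODE };
	int a, b, c;

	a = pick_prot(&data);	/* 1: RW/NX, no misaligned code seen yet */
	b = pick_prot(&code);	/* 2: RWX, and sets the sticky flag */
	c = pick_prot(&data);	/* 2: RWX, the flag is now set */
	printf("%d %d %d\n", a, b, c);
	return 0;
}

The single sticky code_is_misaligned flag means non-code regions mapped
after a misaligned code region fall back to RWX, while everything mapped
before that point keeps a non-executable mapping.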

Cc: <stable@vger.kernel.org>
Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>

Changed files (+34 -18):
arch/arm64/kernel/efi.c
···
 
 #include <asm/efi.h>
 
+static bool region_is_misaligned(const efi_memory_desc_t *md)
+{
+	if (PAGE_SIZE == EFI_PAGE_SIZE)
+		return false;
+	return !PAGE_ALIGNED(md->phys_addr) ||
+	       !PAGE_ALIGNED(md->num_pages << EFI_PAGE_SHIFT);
+}
+
 /*
  * Only regions of type EFI_RUNTIME_SERVICES_CODE need to be
  * executable, everything else can be mapped with the XN bits
···
 	if (type == EFI_MEMORY_MAPPED_IO)
 		return PROT_DEVICE_nGnRE;
 
-	if (WARN_ONCE(!PAGE_ALIGNED(md->phys_addr),
-		      "UEFI Runtime regions are not aligned to 64 KB -- buggy firmware?"))
+	if (region_is_misaligned(md)) {
+		static bool __initdata code_is_misaligned;
+
 		/*
-		 * If the region is not aligned to the page size of the OS, we
-		 * can not use strict permissions, since that would also affect
-		 * the mapping attributes of the adjacent regions.
+		 * Regions that are not aligned to the OS page size cannot be
+		 * mapped with strict permissions, as those might interfere
+		 * with the permissions that are needed by the adjacent
+		 * region's mapping. However, if we haven't encountered any
+		 * misaligned runtime code regions so far, we can safely use
+		 * non-executable permissions for non-code regions.
 		 */
-		return pgprot_val(PAGE_KERNEL_EXEC);
+		code_is_misaligned |= (type == EFI_RUNTIME_SERVICES_CODE);
+
+		return code_is_misaligned ? pgprot_val(PAGE_KERNEL_EXEC)
+					  : pgprot_val(PAGE_KERNEL);
+	}
 
 	/* R-- */
 	if ((attr & (EFI_MEMORY_XP | EFI_MEMORY_RO)) ==
···
 	bool page_mappings_only = (md->type == EFI_RUNTIME_SERVICES_CODE ||
 				   md->type == EFI_RUNTIME_SERVICES_DATA);
 
-	if (!PAGE_ALIGNED(md->phys_addr) ||
-	    !PAGE_ALIGNED(md->num_pages << EFI_PAGE_SHIFT)) {
-		/*
-		 * If the end address of this region is not aligned to page
-		 * size, the mapping is rounded up, and may end up sharing a
-		 * page frame with the next UEFI memory region. If we create
-		 * a block entry now, we may need to split it again when mapping
-		 * the next region, and support for that is going to be removed
-		 * from the MMU routines. So avoid block mappings altogether in
-		 * that case.
-		 */
+	/*
+	 * If this region is not aligned to the page size used by the OS, the
+	 * mapping will be rounded outwards, and may end up sharing a page
+	 * frame with an adjacent runtime memory region. Given that the page
+	 * table descriptor covering the shared page will be rewritten when the
+	 * adjacent region gets mapped, we must avoid block mappings here so we
+	 * don't have to worry about splitting them when that happens.
+	 */
+	if (region_is_misaligned(md))
 		page_mappings_only = true;
-	}
 
 	create_pgd_mapping(mm, md->phys_addr, md->virt_addr,
 			   md->num_pages << EFI_PAGE_SHIFT,
···
 {
 	BUG_ON(md->type != EFI_RUNTIME_SERVICES_CODE &&
 	       md->type != EFI_RUNTIME_SERVICES_DATA);
+
+	if (region_is_misaligned(md))
+		return 0;
 
 	/*
 	 * Calling apply_to_page_range() is only safe on regions that are