Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

arm64: efi: Fix handling of misaligned runtime regions and drop warning

Currently, when mapping the EFI runtime regions in the EFI page tables,
we complain about misaligned regions in a rather noisy way, using
WARN().

Not only does this produce a lot of irrelevant clutter in the log, it is
also factually incorrect, as misaligned runtime regions are actually
allowed by the EFI spec as long as they don't require conflicting memory
types within the same 64 KB page.

So let's drop the warning, and tweak the code so that we
- take both the start and end of the region into account when checking
for misalignment
- only revert to RWX mappings for non-code regions if misaligned code
regions are also known to exist.

Cc: <stable@vger.kernel.org>
Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>

+34 -18
arch/arm64/kernel/efi.c
···
 #include <asm/efi.h>

+static bool region_is_misaligned(const efi_memory_desc_t *md)
+{
+	if (PAGE_SIZE == EFI_PAGE_SIZE)
+		return false;
+	return !PAGE_ALIGNED(md->phys_addr) ||
+	       !PAGE_ALIGNED(md->num_pages << EFI_PAGE_SHIFT);
+}
+
 /*
  * Only regions of type EFI_RUNTIME_SERVICES_CODE need to be
  * executable, everything else can be mapped with the XN bits
···
 	if (type == EFI_MEMORY_MAPPED_IO)
 		return PROT_DEVICE_nGnRE;

-	if (WARN_ONCE(!PAGE_ALIGNED(md->phys_addr),
-		      "UEFI Runtime regions are not aligned to 64 KB -- buggy firmware?"))
+	if (region_is_misaligned(md)) {
+		static bool __initdata code_is_misaligned;
+
 		/*
-		 * If the region is not aligned to the page size of the OS, we
-		 * can not use strict permissions, since that would also affect
-		 * the mapping attributes of the adjacent regions.
+		 * Regions that are not aligned to the OS page size cannot be
+		 * mapped with strict permissions, as those might interfere
+		 * with the permissions that are needed by the adjacent
+		 * region's mapping. However, if we haven't encountered any
+		 * misaligned runtime code regions so far, we can safely use
+		 * non-executable permissions for non-code regions.
 		 */
-		return pgprot_val(PAGE_KERNEL_EXEC);
+		code_is_misaligned |= (type == EFI_RUNTIME_SERVICES_CODE);
+
+		return code_is_misaligned ? pgprot_val(PAGE_KERNEL_EXEC)
+					  : pgprot_val(PAGE_KERNEL);
+	}

 	/* R-- */
 	if ((attr & (EFI_MEMORY_XP | EFI_MEMORY_RO)) ==
···
 	bool page_mappings_only = (md->type == EFI_RUNTIME_SERVICES_CODE ||
 				   md->type == EFI_RUNTIME_SERVICES_DATA);

-	if (!PAGE_ALIGNED(md->phys_addr) ||
-	    !PAGE_ALIGNED(md->num_pages << EFI_PAGE_SHIFT)) {
-		/*
-		 * If the end address of this region is not aligned to page
-		 * size, the mapping is rounded up, and may end up sharing a
-		 * page frame with the next UEFI memory region. If we create
-		 * a block entry now, we may need to split it again when mapping
-		 * the next region, and support for that is going to be removed
-		 * from the MMU routines. So avoid block mappings altogether in
-		 * that case.
-		 */
+	/*
+	 * If this region is not aligned to the page size used by the OS, the
+	 * mapping will be rounded outwards, and may end up sharing a page
+	 * frame with an adjacent runtime memory region. Given that the page
+	 * table descriptor covering the shared page will be rewritten when the
+	 * adjacent region gets mapped, we must avoid block mappings here so we
+	 * don't have to worry about splitting them when that happens.
+	 */
+	if (region_is_misaligned(md))
 		page_mappings_only = true;
-	}

 	create_pgd_mapping(mm, md->phys_addr, md->virt_addr,
 			   md->num_pages << EFI_PAGE_SHIFT,
···
 	BUG_ON(md->type != EFI_RUNTIME_SERVICES_CODE &&
 	       md->type != EFI_RUNTIME_SERVICES_DATA);
+
+	if (region_is_misaligned(md))
+		return 0;

 	/*
 	 * Calling apply_to_page_range() is only safe on regions that are