
fs/binfmt_elf: use PT_LOAD p_align values for suitable start address

Patch series "Selecting Load Addresses According to p_align", v3.

The current ELF loading mechanism provides page-aligned mappings.  This
can lead to the program being loaded in a way unsuitable for file-backed,
transparent huge pages when handling PIE executables.

While specifying -z max-page-size=0x200000 to the linker will generate
suitably aligned segments for huge pages on x86_64, the executable needs
to be loaded at a suitably aligned address as well.  This alignment
requires the binary's cooperation, as distinct segments need to be
appropriately padded to be eligible for THP.

For binaries built with increased alignment, this limits the number of
bits usable for ASLR, but provides some randomization over using fixed
load addresses/non-PIE binaries.

This patch (of 2):

The current ELF loading mechanism provides page-aligned mappings.  This
can lead to the program being loaded in a way unsuitable for file-backed,
transparent huge pages when handling PIE executables.

For binaries built with increased alignment, this limits the number of
bits usable for ASLR, but provides some randomization over using fixed
load addresses/non-PIE binaries.

Tested by verifying that a program built with -Wl,-z,max-page-size=0x200000
loads as expected.

[akpm@linux-foundation.org: fix max() warning]
[ckennelly@google.com: augment comment]
Link: https://lkml.kernel.org/r/20200821233848.3904680-2-ckennelly@google.com

Signed-off-by: Chris Kennelly <ckennelly@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Cc: Alexander Viro <viro@zeniv.linux.org.uk>
Cc: Alexey Dobriyan <adobriyan@gmail.com>
Cc: Song Liu <songliubraving@fb.com>
Cc: David Rientjes <rientjes@google.com>
Cc: Ian Rogers <irogers@google.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Suren Baghdasaryan <surenb@google.com>
Cc: Sandeep Patil <sspatil@google.com>
Cc: Fangrui Song <maskray@google.com>
Cc: Nick Desaulniers <ndesaulniers@google.com>
Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
Cc: Mike Kravetz <mike.kravetz@oracle.com>
Cc: Shuah Khan <shuah@kernel.org>
Link: https://lkml.kernel.org/r/20200820170541.1132271-1-ckennelly@google.com
Link: https://lkml.kernel.org/r/20200820170541.1132271-2-ckennelly@google.com
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

Authored by Chris Kennelly, committed by Linus Torvalds
commit ce81bb25, parent 48ca2d8a (+25 lines)
--- a/fs/binfmt_elf.c
+++ b/fs/binfmt_elf.c
···
 #include <linux/module.h>
 #include <linux/kernel.h>
 #include <linux/fs.h>
+#include <linux/log2.h>
 #include <linux/mm.h>
 #include <linux/mman.h>
 #include <linux/errno.h>
···
 		return (rv < 0) ? rv : -EIO;
 	}
 	return 0;
+}
+
+static unsigned long maximum_alignment(struct elf_phdr *cmds, int nr)
+{
+	unsigned long alignment = 0;
+	int i;
+
+	for (i = 0; i < nr; i++) {
+		if (cmds[i].p_type == PT_LOAD) {
+			unsigned long p_align = cmds[i].p_align;
+
+			/* skip non-power of two alignments as invalid */
+			if (!is_power_of_2(p_align))
+				continue;
+			alignment = max(alignment, p_align);
+		}
+	}
+
+	/* ensure we align to at least one page */
+	return ELF_PAGEALIGN(alignment);
 }
 
 /**
···
 		int elf_prot, elf_flags;
 		unsigned long k, vaddr;
 		unsigned long total_size = 0;
+		unsigned long alignment;
 
 		if (elf_ppnt->p_type != PT_LOAD)
 			continue;
···
 			load_bias = ELF_ET_DYN_BASE;
 			if (current->flags & PF_RANDOMIZE)
 				load_bias += arch_mmap_rnd();
+			alignment = maximum_alignment(elf_phdata, elf_ex->e_phnum);
+			if (alignment)
+				load_bias &= ~(alignment - 1);
 			elf_flags |= MAP_FIXED;
 		} else
 			load_bias = 0;