
module: use relative references for __ksymtab entries

An ordinary arm64 defconfig build has ~64 KB worth of __ksymtab entries,
each consisting of two 64-bit fields containing absolute references, to
the symbol itself and to a char array containing its name, respectively.

When we build the same configuration with KASLR enabled, we end up with an
additional ~192 KB of relocations in the .init section, i.e., one 24 byte
entry for each absolute reference, which all need to be processed at boot
time.

Given that the struct kernel_symbol that describes each entry is completely
local to module.c (except for the references emitted by EXPORT_SYMBOL()
itself), we can easily modify it to contain two 32-bit relative references
instead. This reduces the size of the __ksymtab section by 50% for all
64-bit architectures, and gets rid of the runtime relocations entirely for
architectures implementing KASLR, either via standard PIE linking (arm64)
or using custom host tools (x86).

Note that the binary search involving __ksymtab contents relies on each
section being sorted by symbol name. This is implemented based on the
input section names, not the names in the ksymtab entries, so this patch
does not interfere with that.

Given that the use of place-relative relocations requires support both in
the toolchain and in the module loader, we cannot enable this feature for
all architectures. So make it dependent on whether
CONFIG_HAVE_ARCH_PREL32_RELOCATIONS is defined.

Link: http://lkml.kernel.org/r/20180704083651.24360-4-ard.biesheuvel@linaro.org
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Jessica Yu <jeyu@kernel.org>
Acked-by: Michael Ellerman <mpe@ellerman.id.au>
Reviewed-by: Will Deacon <will.deacon@arm.com>
Acked-by: Ingo Molnar <mingo@kernel.org>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Bjorn Helgaas <bhelgaas@google.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: James Morris <james.morris@microsoft.com>
Cc: James Morris <jmorris@namei.org>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: Nicolas Pitre <nico@linaro.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Petr Mladek <pmladek@suse.com>
Cc: Russell King <linux@armlinux.org.uk>
Cc: "Serge E. Hallyn" <serge@hallyn.com>
Cc: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Thomas Garnier <thgarnie@google.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

Authored by Ard Biesheuvel; committed by Linus Torvalds
commit 7290d580, parent f922c4ab

+91 -24
arch/x86/include/asm/Kbuild (+1)
···
 generic-y += dma-contiguous.h
 generic-y += early_ioremap.h
+generic-y += export.h
 generic-y += mcs_spinlock.h
 generic-y += mm-arch-hooks.h
arch/x86/include/asm/export.h (deleted, -5)
···
-/* SPDX-License-Identifier: GPL-2.0 */
-#ifdef CONFIG_64BIT
-#define KSYM_ALIGN 16
-#endif
-#include <asm-generic/export.h>
include/asm-generic/export.h (+10 -2)
···
 #define KSYM_FUNC(x) x
 #endif
 #ifdef CONFIG_64BIT
-#define __put .quad
 #ifndef KSYM_ALIGN
 #define KSYM_ALIGN 8
 #endif
 #else
-#define __put .long
 #ifndef KSYM_ALIGN
 #define KSYM_ALIGN 4
 #endif
···
 #ifndef KCRC_ALIGN
 #define KCRC_ALIGN 4
 #endif
+
+.macro __put, val, name
+#ifdef CONFIG_HAVE_ARCH_PREL32_RELOCATIONS
+	.long	\val - ., \name - .
+#elif defined(CONFIG_64BIT)
+	.quad	\val, \name
+#else
+	.long	\val, \name
+#endif
+.endm

 /*
  * note on .section use: @progbits vs %progbits nastiness doesn't matter,
include/linux/compiler.h (+19)
···
 #endif /* __KERNEL__ */

+/*
+ * Force the compiler to emit 'sym' as a symbol, so that we can reference
+ * it from inline assembler. Necessary in case 'sym' could be inlined
+ * otherwise, or eliminated entirely due to lack of references that are
+ * visible to the compiler.
+ */
+#define __ADDRESSABLE(sym) \
+	static void * __attribute__((section(".discard.addressable"), used)) \
+		__PASTE(__addressable_##sym, __LINE__) = (void *)&sym;
+
+/**
+ * offset_to_ptr - convert a relative memory offset to an absolute pointer
+ * @off: the address of the 32-bit offset value
+ */
+static inline void *offset_to_ptr(const int *off)
+{
+	return (void *)((unsigned long)off + *off);
+}
+
 #endif /* __ASSEMBLY__ */

 #ifndef __optimize
include/linux/export.h (+35 -11)
···
 #define VMLINUX_SYMBOL_STR(x) __VMLINUX_SYMBOL_STR(x)

 #ifndef __ASSEMBLY__
-struct kernel_symbol
-{
-	unsigned long value;
-	const char *name;
-};
-
 #ifdef MODULE
 extern struct module __this_module;
 #define THIS_MODULE (&__this_module)
···
 #define __CRC_SYMBOL(sym, sec)
 #endif

+#ifdef CONFIG_HAVE_ARCH_PREL32_RELOCATIONS
+#include <linux/compiler.h>
+/*
+ * Emit the ksymtab entry as a pair of relative references: this reduces
+ * the size by half on 64-bit architectures, and eliminates the need for
+ * absolute relocations that require runtime processing on relocatable
+ * kernels.
+ */
+#define __KSYMTAB_ENTRY(sym, sec) \
+	__ADDRESSABLE(sym) \
+	asm("	.section \"___ksymtab" sec "+" #sym "\", \"a\"	\n" \
+	    "	.balign	8					\n" \
+	    "__ksymtab_" #sym ":				\n" \
+	    "	.long	" #sym "- .				\n" \
+	    "	.long	__kstrtab_" #sym "- .			\n" \
+	    "	.previous					\n")
+
+struct kernel_symbol {
+	int value_offset;
+	int name_offset;
+};
+#else
+#define __KSYMTAB_ENTRY(sym, sec) \
+	static const struct kernel_symbol __ksymtab_##sym \
+	__attribute__((section("___ksymtab" sec "+" #sym), used)) \
+	= { (unsigned long)&sym, __kstrtab_##sym }
+
+struct kernel_symbol {
+	unsigned long value;
+	const char *name;
+};
+#endif
+
 /* For every exported symbol, place a struct in the __ksymtab section */
 #define ___EXPORT_SYMBOL(sym, sec) \
 	extern typeof(sym) sym; \
 	__CRC_SYMBOL(sym, sec) \
 	static const char __kstrtab_##sym[] \
-	__attribute__((section("__ksymtab_strings"), aligned(1))) \
+	__attribute__((section("__ksymtab_strings"), used, aligned(1))) \
 	= #sym; \
-	static const struct kernel_symbol __ksymtab_##sym \
-	__used \
-	__attribute__((section("___ksymtab" sec "+" #sym), used)) \
-	= { (unsigned long)&sym, __kstrtab_##sym }
+	__KSYMTAB_ENTRY(sym, sec)

 #if defined(__DISABLE_EXPORTS)
kernel/module.c (+26 -6)
···
 	return true;
 }

+static unsigned long kernel_symbol_value(const struct kernel_symbol *sym)
+{
+#ifdef CONFIG_HAVE_ARCH_PREL32_RELOCATIONS
+	return (unsigned long)offset_to_ptr(&sym->value_offset);
+#else
+	return sym->value;
+#endif
+}
+
+static const char *kernel_symbol_name(const struct kernel_symbol *sym)
+{
+#ifdef CONFIG_HAVE_ARCH_PREL32_RELOCATIONS
+	return offset_to_ptr(&sym->name_offset);
+#else
+	return sym->name;
+#endif
+}
+
 static int cmp_name(const void *va, const void *vb)
 {
 	const char *a;
 	const struct kernel_symbol *b;
 	a = va; b = vb;
-	return strcmp(a, b->name);
+	return strcmp(a, kernel_symbol_name(b));
 }

 static bool find_symbol_in_section(const struct symsearch *syms,
···
 	sym = NULL;
 	preempt_enable();

-	return sym ? (void *)sym->value : NULL;
+	return sym ? (void *)kernel_symbol_value(sym) : NULL;
 }
 EXPORT_SYMBOL_GPL(__symbol_get);
···
 	for (i = 0; i < ARRAY_SIZE(arr); i++) {
 		for (s = arr[i].sym; s < arr[i].sym + arr[i].num; s++) {
-			if (find_symbol(s->name, &owner, NULL, true, false)) {
+			if (find_symbol(kernel_symbol_name(s), &owner, NULL,
+					true, false)) {
 				pr_err("%s: exports duplicate symbol %s"
 				       " (owned by %s)\n",
-				       mod->name, s->name, module_name(owner));
+				       mod->name, kernel_symbol_name(s),
+				       module_name(owner));
 				return -ENOEXEC;
 			}
 		}
···
 		ksym = resolve_symbol_wait(mod, info, name);
 		/* Ok if resolved. */
 		if (ksym && !IS_ERR(ksym)) {
-			sym[i].st_value = ksym->value;
+			sym[i].st_value = kernel_symbol_value(ksym);
 			break;
 		}
···
 		ks = lookup_symbol(name, __start___ksymtab, __stop___ksymtab);
 	else
 		ks = lookup_symbol(name, mod->syms, mod->syms + mod->num_syms);
-	return ks != NULL && ks->value == value;
+	return ks != NULL && kernel_symbol_value(ks) == value;
 }

 /* As per nm */