Linux kernel mirror: git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

powerpc/64: Handle linker stubs in low .text code

Very large kernels may require linker stubs for branches from HEAD
text code. The linker may place these stubs before the HEAD text
sections, which breaks the assumption that HEAD text is located at 0
(or that the .text section is located at 0x7000/0x8000 on Book3S
kernels).

Provide an option to create a small section just before the .text
section, containing 256 - 4 bytes of empty space, and adjust the start
of the .text section to match. The linker will tend to put stubs in
that section and so not break our relative-to-absolute offset
assumptions.
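As a rough arithmetic sketch (not part of the patch, and using hypothetical values): the catch section declares only 4 bytes, and .text is aligned up to the next 0x100 boundary, so start_text moves up by one full 0x100 block, leaving 256 - 4 = 252 bytes of slack for the linker to fill with stubs:

```shell
# Sketch of the address arithmetic described above (hypothetical values):
# the stub-catch section holds 4 declared bytes, and .text is aligned
# to a 0x100 boundary, so start_text moves up by one full 0x100 block.
start=0                                       # original start of HEAD text
aligned=$(( (start + 0x4 + 0xff) & ~0xff ))   # next 0x100 boundary after the 4 bytes
slack=$(( 0x100 - 0x4 ))                      # room left for linker stubs
echo "start_text offset: $aligned"            # 256
echo "stub slack: $slack"                     # 252
```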

This causes a small waste of space on common kernels, but allows large
kernels to build and boot. For now, it is an EXPERT config option,
defaulting to =n, and a reference is provided for it in the build-time
check for such breakage. This is good enough for allyesconfig and
custom users / hackers.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>

Authored by Nicholas Piggin, committed by Michael Ellerman (951eedeb 4ea80652)

34 insertions(+)

arch/powerpc/Kconfig (+11)
@@ -455,6 +455,17 @@
 	---help---
 	  Support user-mode Transactional Memory on POWERPC.
 
+config LD_HEAD_STUB_CATCH
+	bool "Reserve 256 bytes to cope with linker stubs in HEAD text" if EXPERT
+	depends on PPC64
+	default n
+	help
+	  Very large kernels can cause linker branch stubs to be generated by
+	  code in head_64.S, which moves the head text sections out of their
+	  specified location. This option can work around the problem.
+
+	  If unsure, say "N".
+
 config DISABLE_MPROFILE_KERNEL
 	bool "Disable use of mprofile-kernel for kernel tracing"
 	depends on PPC64 && CPU_LITTLE_ENDIAN
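For reference, a config fragment enabling the new option might look like the following (a sketch; CONFIG_EXPERT must be set for the prompt to be visible, per the `if EXPERT` qualifier above):

```
CONFIG_EXPERT=y
CONFIG_PPC64=y
CONFIG_LD_HEAD_STUB_CATCH=y
```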
arch/powerpc/include/asm/head-64.h (+18)
@@ -63,11 +63,29 @@
 	. = 0x0;						\
 start_##sname:
 
+/*
+ * .linker_stub_catch section is used to catch linker stubs from being
+ * inserted in our .text section, above the start_text label (which breaks
+ * the ABS_ADDR calculation). See kernel/vmlinux.lds.S and tools/head_check.sh
+ * for more details. We would prefer to just keep a cacheline (0x80), but
+ * 0x100 seems to be how the linker aligns branch stub groups.
+ */
+#ifdef CONFIG_LD_HEAD_STUB_CATCH
+#define OPEN_TEXT_SECTION(start)				\
+	.section ".linker_stub_catch","ax",@progbits;		\
+linker_stub_catch:						\
+	. = 0x4;						\
+	text_start = (start) + 0x100;				\
+	.section ".text","ax",@progbits;			\
+	.balign 0x100;						\
+start_text:
+#else
 #define OPEN_TEXT_SECTION(start)				\
 	text_start = (start);					\
 	.section ".text","ax",@progbits;		\
 	. = 0x0;						\
 start_text:
+#endif
 
 #define ZERO_FIXED_SECTION(sname, start, end)			\
 	sname##_start = (start);				\
arch/powerpc/kernel/vmlinux.lds.S (+5)
@@ -103,6 +103,11 @@
  * section placement to work.
  */
 .text BLOCK(0) : AT(ADDR(.text) - LOAD_OFFSET) {
+#ifdef CONFIG_LD_HEAD_STUB_CATCH
+	*(.linker_stub_catch);
+	. = . ;
+#endif
+
 #else
 .text : AT(ADDR(.text) - LOAD_OFFSET) {
 	ALIGN_FUNCTION();