Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

x86/mm/pti: Make pti_clone_kernel_text() compile on 32 bit

The pti_clone_kernel_text() function references __end_rodata_hpage_align,
which is only present on x86-64. This makes sense as the end of the rodata
section is not huge-page aligned on 32 bit.

Nevertheless, the function needs a symbol that points at the right
address on both 32- and 64-bit builds. Introduce __end_rodata_aligned for
that purpose and use it in pti_clone_kernel_text().

Signed-off-by: Joerg Roedel <jroedel@suse.de>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Pavel Machek <pavel@ucw.cz>
Cc: "H . Peter Anvin" <hpa@zytor.com>
Cc: linux-mm@kvack.org
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Juergen Gross <jgross@suse.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Jiri Kosina <jkosina@suse.cz>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: David Laight <David.Laight@aculab.com>
Cc: Denys Vlasenko <dvlasenk@redhat.com>
Cc: Eduardo Valentin <eduval@amazon.com>
Cc: Greg KH <gregkh@linuxfoundation.org>
Cc: Will Deacon <will.deacon@arm.com>
Cc: aliguori@amazon.com
Cc: daniel.gruss@iaik.tugraz.at
Cc: hughd@google.com
Cc: keescook@google.com
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Waiman Long <llong@redhat.com>
Cc: "David H . Gutteridge" <dhgutteridge@sympatico.ca>
Cc: joro@8bytes.org
Link: https://lkml.kernel.org/r/1531906876-13451-28-git-send-email-joro@8bytes.org

authored by Joerg Roedel, committed by Thomas Gleixner
39d668e0 f94560cd

+12 -8

arch/x86/include/asm/sections.h (+1)
@@ -7,6 +7,7 @@
 
 extern char __brk_base[], __brk_limit[];
 extern struct exception_table_entry __stop___ex_table[];
+extern char __end_rodata_aligned[];
 
 #if defined(CONFIG_X86_64)
 extern char __end_rodata_hpage_align[];
arch/x86/kernel/vmlinux.lds.S (+10 -7)
@@ -55,19 +55,22 @@
  * so we can enable protection checks as well as retain 2MB large page
  * mappings for kernel text.
  */
-#define X64_ALIGN_RODATA_BEGIN	. = ALIGN(HPAGE_SIZE);
+#define X86_ALIGN_RODATA_BEGIN	. = ALIGN(HPAGE_SIZE);
 
-#define X64_ALIGN_RODATA_END		\
+#define X86_ALIGN_RODATA_END		\
 		. = ALIGN(HPAGE_SIZE);	\
-		__end_rodata_hpage_align = .;
+		__end_rodata_hpage_align = .;	\
+		__end_rodata_aligned = .;
 
 #define ALIGN_ENTRY_TEXT_BEGIN	. = ALIGN(PMD_SIZE);
 #define ALIGN_ENTRY_TEXT_END	. = ALIGN(PMD_SIZE);
 
 #else
 
-#define X64_ALIGN_RODATA_BEGIN
-#define X64_ALIGN_RODATA_END
+#define X86_ALIGN_RODATA_BEGIN
+#define X86_ALIGN_RODATA_END		\
+		. = ALIGN(PAGE_SIZE);		\
+		__end_rodata_aligned = .;
 
 #define ALIGN_ENTRY_TEXT_BEGIN
 #define ALIGN_ENTRY_TEXT_END
@@ -141,9 +144,9 @@
 
 	/* .text should occupy whole number of pages */
 	. = ALIGN(PAGE_SIZE);
-	X64_ALIGN_RODATA_BEGIN
+	X86_ALIGN_RODATA_BEGIN
 	RO_DATA(PAGE_SIZE)
-	X64_ALIGN_RODATA_END
+	X86_ALIGN_RODATA_END
 
 	/* Data */
 	.data : AT(ADDR(.data) - LOAD_OFFSET) {
arch/x86/mm/pti.c (+1 -1)
@@ -470,7 +470,7 @@
 	 * clone the areas past rodata, they might contain secrets.
 	 */
 	unsigned long start = PFN_ALIGN(_text);
-	unsigned long end = (unsigned long)__end_rodata_hpage_align;
+	unsigned long end = (unsigned long)__end_rodata_aligned;
 
 	if (!pti_kernel_image_global_ok())
 		return;