Linux kernel mirror (for testing): git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

x86/asm: Do not annotate functions with GLOBAL

GLOBAL is a custom x86 macro and is going to be removed very soon. It
was meant for global symbols in general, but here it was used for
functions. Instead, use the new macros SYM_FUNC_START* and
SYM_CODE_START* (depending on the type of the function), which are
dedicated to global functions. Since they both require a closing
SYM_*_END, add that here too.

startup_64, which does not use GLOBAL but declares .globl explicitly,
is converted too.

The lack of alignment is preserved: the *_NOALIGN variants of the
macros are used, so no alignment directives are introduced.
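The conversion pattern can be sketched as below; my_func is a
hypothetical symbol used only for illustration, not one from this
patch:

```asm
/* Before: GLOBAL marks the symbol global, ENDPROC closes it. */
GLOBAL(my_func)
	ret
ENDPROC(my_func)

/* After: paired start/end macros dedicated to global functions.
 * The _NOALIGN variant emits no alignment directive, matching the
 * old behavior of GLOBAL. */
SYM_FUNC_START_NOALIGN(my_func)
	ret
SYM_FUNC_END(my_func)
```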

Signed-off-by: Jiri Slaby <jslaby@suse.cz>
Signed-off-by: Borislav Petkov <bp@suse.de>
Cc: Allison Randal <allison@lohutok.net>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Cao jin <caoj.fnst@cn.fujitsu.com>
Cc: Enrico Weigelt <info@metux.net>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Juergen Gross <jgross@suse.com>
Cc: Kate Stewart <kstewart@linuxfoundation.org>
Cc: linux-arch@vger.kernel.org
Cc: Maran Wilson <maran.wilson@oracle.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: x86-ml <x86@kernel.org>
Link: https://lkml.kernel.org/r/20191011115108.12392-17-jslaby@suse.cz

Authored by Jiri Slaby, committed by Borislav Petkov
37818afd b16fed65

+13 -12 overall

arch/x86/boot/copy.S (+8 -8)

@@ -15,7 +15,7 @@
 	.code16
 	.text
 
-GLOBAL(memcpy)
+SYM_FUNC_START_NOALIGN(memcpy)
 	pushw	%si
 	pushw	%di
 	movw	%ax, %di
@@ -29,9 +29,9 @@
 	popw	%di
 	popw	%si
 	retl
-ENDPROC(memcpy)
+SYM_FUNC_END(memcpy)
 
-GLOBAL(memset)
+SYM_FUNC_START_NOALIGN(memset)
 	pushw	%di
 	movw	%ax, %di
 	movzbl	%dl, %eax
@@ -44,22 +44,22 @@
 	rep; stosb
 	popw	%di
 	retl
-ENDPROC(memset)
+SYM_FUNC_END(memset)
 
-GLOBAL(copy_from_fs)
+SYM_FUNC_START_NOALIGN(copy_from_fs)
 	pushw	%ds
 	pushw	%fs
 	popw	%ds
 	calll	memcpy
 	popw	%ds
 	retl
-ENDPROC(copy_from_fs)
+SYM_FUNC_END(copy_from_fs)
 
-GLOBAL(copy_to_fs)
+SYM_FUNC_START_NOALIGN(copy_to_fs)
 	pushw	%es
 	pushw	%fs
 	popw	%es
 	calll	memcpy
 	popw	%es
 	retl
-ENDPROC(copy_to_fs)
+SYM_FUNC_END(copy_to_fs)
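For context, the new macros expand to ordinary assembler directives.
This is a simplified sketch assuming the v5.4-era definitions in
include/linux/linkage.h (helper layers and annotations elided):

```asm
/* SYM_FUNC_START_NOALIGN(name) expands to roughly: */
	.globl	name
name:

/* SYM_FUNC_END(name) expands to roughly: */
	.type	name, @function
	.size	name, .-name
```

Because SYM_FUNC_END sets both .type and .size, every start macro
needs a matching end macro, which is why the unpaired GLOBAL/ENDPROC
annotations above are replaced with paired start/end ones.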
arch/x86/boot/pmjump.S (+2 -2)

@@ -21,7 +21,7 @@
 /*
  * void protected_mode_jump(u32 entrypoint, u32 bootparams);
  */
-GLOBAL(protected_mode_jump)
+SYM_FUNC_START_NOALIGN(protected_mode_jump)
 	movl	%edx, %esi	# Pointer to boot_params table
 
 	xorl	%ebx, %ebx
@@ -42,7 +42,7 @@
 	.byte	0x66, 0xea	# ljmpl opcode
2:	.long	.Lin_pm32	# offset
 	.word	__BOOT_CS	# segment
-ENDPROC(protected_mode_jump)
+SYM_FUNC_END(protected_mode_jump)
 
 	.code32
 	.section ".text32","ax"
arch/x86/kernel/head_64.S (+3 -2)

@@ -49,8 +49,7 @@
 	.text
 	__HEAD
 	.code64
-	.globl startup_64
-startup_64:
+SYM_CODE_START_NOALIGN(startup_64)
 	UNWIND_HINT_EMPTY
 	/*
 	 * At this point the CPU runs in 64bit mode CS.L = 1 CS.D = 0,
@@ -89,6 +90,8 @@
 	/* Form the CR3 value being sure to include the CR3 modifier */
 	addq	$(early_top_pgt - __START_KERNEL_map), %rax
 	jmp	1f
+SYM_CODE_END(startup_64)
+
 ENTRY(secondary_startup_64)
 	UNWIND_HINT_EMPTY
 	/*