Linux kernel mirror (for testing): git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

tile: normalize global variables exported by vmlinux.lds

Normalize the global variables exported by vmlinux.lds to conform to the
usage guidelines in include/asm-generic/sections.h:

1) Use _text to mark the start of the kernel image, including the head
text, and _stext to mark the start of the .text section.
2) Export mandatory global variables __init_begin and __init_end.

Signed-off-by: Jiang Liu <jiang.liu@huawei.com>
Acked-by: Chris Metcalf <cmetcalf@tilera.com>
Cc: Rusty Russell <rusty@rustcorp.com.au>
Cc: Bjorn Helgaas <bhelgaas@google.com>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Wen Congyang <wency@cn.fujitsu.com>
Cc: David Howells <dhowells@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

Authored by Jiang Liu; committed by Linus Torvalds
40a3b8df ae49b83d

4 files changed, 7 insertions(+), 5 deletions(-)
arch/tile/include/asm/sections.h (+1 -1)

@@ -34,7 +34,7 @@
 extern char __start_atomic_asm_code[], __end_atomic_asm_code[];
 #endif
 
-/* Handle the discontiguity between _sdata and _stext. */
+/* Handle the discontiguity between _sdata and _text. */
 static inline int arch_is_kernel_data(unsigned long addr)
 {
 	return addr >= (unsigned long)_sdata &&
arch/tile/kernel/setup.c (+2 -2)

@@ -307,8 +307,8 @@
 		hv_store_mapping(addr, pages << PAGE_SHIFT, pa);
 	}
 
-	hv_store_mapping((HV_VirtAddr)_stext,
-			 (uint32_t)(_einittext - _stext), 0);
+	hv_store_mapping((HV_VirtAddr)_text,
+			 (uint32_t)(_einittext - _text), 0);
 }
 
 /*
arch/tile/kernel/vmlinux.lds.S (+3 -1)

@@ -27,7 +27,6 @@
   .intrpt1 (LOAD_OFFSET) : AT ( 0 )   /* put at the start of physical memory */
   {
     _text = .;
-    _stext = .;
     *(.intrpt1)
   } :intrpt1 =0
 
@@ -35,6 +36,7 @@
 
   /* Now the real code */
   . = ALIGN(0x20000);
+  _stext = .;
   .text : AT (ADDR(.text) - LOAD_OFFSET) {
     HEAD_TEXT
     SCHED_TEXT
@@ -58,11 +58,13 @@
   #define LOAD_OFFSET PAGE_OFFSET
 
   . = ALIGN(PAGE_SIZE);
+  __init_begin = .;
   VMLINUX_SYMBOL(_sinitdata) = .;
   INIT_DATA_SECTION(16) :data =0
   PERCPU_SECTION(L2_CACHE_BYTES)
   . = ALIGN(PAGE_SIZE);
   VMLINUX_SYMBOL(_einitdata) = .;
+  __init_end = .;
 
   _sdata = .;                   /* Start of data section */
arch/tile/mm/init.c (+1 -1)

@@ -562,7 +562,7 @@
 		prot = ktext_set_nocache(prot);
 	}
 
-	BUG_ON(address != (unsigned long)_stext);
+	BUG_ON(address != (unsigned long)_text);
 	pte = NULL;
 	for (; address < (unsigned long)_einittext;
 	     pfn++, address += PAGE_SIZE) {