vmlinux.lds.h: restructure BSS linker script macros

The BSS section macros in vmlinux.lds.h currently place the .sbss
input section outside the bounds of [__bss_start, __bss_stop]. On all
architectures except microblaze that handle both .sbss and
__bss_start/__bss_stop, this is wrong: the .sbss input section belongs
within the range [__bss_start, __bss_stop]. Relatedly, the example
code at the top of the file actually has __bss_start/__bss_stop
defined twice; I believe the right fix here is to define them in the
BSS_SECTION macro but not in the BSS macro.
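
To make the intended layout concrete, here is a rough sketch
(illustrative only, not the literal macro bodies) of what a
BSS_SECTION(sbss_align, bss_align, stop_align) expansion should look
like once the symbols are defined in a single place:

	. = ALIGN(sbss_align);
	__bss_start = .;
	.sbss : { *(.sbss) *(.scommon) }
	. = ALIGN(bss_align);
	.bss  : { *(.bss.page_aligned) *(.dynbss) *(.bss) *(COMMON) }
	. = ALIGN(stop_align);
	__bss_stop = .;

This way both .sbss and .bss land inside [__bss_start, __bss_stop],
and each symbol is defined exactly once.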

Another problem with the current macros is that several
architectures have an ALIGN(4), or some other small number, just
before __bss_stop in their linker scripts. The BSS_SECTION macro
currently hardcodes this alignment to 4, when it should really be an
argument. The macro also ignores its sbss_align argument; fix that
too.
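
For example (the call site is mn10300's from the diff below; the
expansion is paraphrased, not the literal macro text),
BSS_SECTION(0, PAGE_SIZE, 4) would expand along the lines of:

	. = ALIGN(0);		/* sbss_align: no extra alignment requested */
	__bss_start = .;
	SBSS(0)
	BSS(PAGE_SIZE)
	. = ALIGN(4);		/* stop_align: the old ALIGN(4) before __bss_stop */
	__bss_stop = .;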

mn10300 is at present the only user of any of the macros touched by
this patch. It looks like mn10300 was actually converted incorrectly
to use the new BSS() macro (the alignment of 4 prior to the conversion
was a __bss_stop alignment, but the argument to the BSS() macro is a
start alignment), so fix this as well.

I'd like acks from Sam and David on this one. Also CCing Paul, since
he has a patch from me which will need to be updated to use
BSS_SECTION(0, PAGE_SIZE, 4) once this gets merged.

Signed-off-by: Tim Abbott <tabbott@ksplice.com>
Cc: Paul Mundt <lethal@linux-sh.org>
Cc: David Howells <dhowells@redhat.com>
Signed-off-by: Sam Ravnborg <sam@ravnborg.org>

Authored by Tim Abbott and committed by Sam Ravnborg (04e448d9 d0e1e095)

2 files changed, +10 -11

arch/mn10300/kernel/vmlinux.lds.S (+1 -1)

@@ -107,7 +107,7 @@
 	__init_end = .;
 	/* freed after init ends here */
 
-	BSS(4)
+	BSS_SECTION(0, PAGE_SIZE, 4)
 
 	_end = . ;
 
include/asm-generic/vmlinux.lds.h (+9 -10)

@@ -30,9 +30,7 @@
  * EXCEPTION_TABLE(...)
  * NOTES
  *
- * __bss_start = .;
- * BSS_SECTION(0, 0)
- * __bss_stop = .;
+ * BSS_SECTION(0, 0, 0)
  * _end = .;
  *
  * /DISCARD/ : {
@@ -487,7 +489,8 @@
  * bss (Block Started by Symbol) - uninitialized data
  * zeroed during startup
  */
-#define SBSS						\
+#define SBSS(sbss_align)				\
+	. = ALIGN(sbss_align);				\
 	.sbss : AT(ADDR(.sbss) - LOAD_OFFSET) {		\
 		*(.sbss)				\
 		*(.scommon)				\
@@ -497,12 +498,10 @@
 #define BSS(bss_align)					\
 	. = ALIGN(bss_align);				\
 	.bss : AT(ADDR(.bss) - LOAD_OFFSET) {		\
-		VMLINUX_SYMBOL(__bss_start) = .;	\
 		*(.bss.page_aligned)			\
 		*(.dynbss)				\
 		*(.bss)					\
 		*(COMMON)				\
-		VMLINUX_SYMBOL(__bss_stop) = .;		\
 	}
 
 /*
@@ -732,8 +735,10 @@
 		INIT_RAM_FS				\
 	}
 
-#define BSS_SECTION(sbss_align, bss_align)		\
-	SBSS						\
+#define BSS_SECTION(sbss_align, bss_align, stop_align)	\
+	. = ALIGN(sbss_align);				\
+	VMLINUX_SYMBOL(__bss_start) = .;		\
+	SBSS(sbss_align)				\
 	BSS(bss_align)					\
-	. = ALIGN(4);
-
+	. = ALIGN(stop_align);				\
+	VMLINUX_SYMBOL(__bss_stop) = .;