Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Rename .data.cacheline_aligned to .data..cacheline_aligned.

Signed-off-by: Tim Abbott <tabbott@ksplice.com>
Cc: Sam Ravnborg <sam@ravnborg.org>
Signed-off-by: Denys Vlasenko <vda.linux@googlemail.com>
Signed-off-by: Michal Marek <mmarek@suse.cz>

Authored by Tim Abbott, committed by Michal Marek
4af57b78 bc75cc6b

+4 -4
+1 -1
arch/powerpc/kernel/vmlinux.lds.S
@@ -231,7 +231,7 @@
 		PAGE_ALIGNED_DATA(PAGE_SIZE)
 	}
 
-	.data.cacheline_aligned : AT(ADDR(.data.cacheline_aligned) - LOAD_OFFSET) {
+	.data..cacheline_aligned : AT(ADDR(.data..cacheline_aligned) - LOAD_OFFSET) {
 		CACHELINE_ALIGNED_DATA(L1_CACHE_BYTES)
 	}
+1 -1
arch/x86/kernel/init_task.c
@@ -34,7 +34,7 @@
 /*
  * per-CPU TSS segments. Threads are completely 'soft' on Linux,
  * no more per-task TSS's. The TSS size is kept cacheline-aligned
- * so they are allowed to end up in the .data.cacheline_aligned
+ * so they are allowed to end up in the .data..cacheline_aligned
  * section. Since TSS's are completely CPU-local, we want them
  * on exact cacheline boundaries, to eliminate cacheline ping-pong.
  */
+1 -1
include/asm-generic/vmlinux.lds.h
@@ -189,7 +189,7 @@
 
 #define CACHELINE_ALIGNED_DATA(align)				\
 	. = ALIGN(align);					\
-	*(.data.cacheline_aligned)
+	*(.data..cacheline_aligned)
 
 #define INIT_TASK_DATA(align)					\
 	. = ALIGN(align);					\
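With the rename applied, an output-section description built from CACHELINE_ALIGNED_DATA — such as the powerpc one in this commit — expands to roughly the following sketch (assuming L1_CACHE_BYTES is 64 for illustration; the real value is per-architecture):

```
.data..cacheline_aligned : AT(ADDR(.data..cacheline_aligned) - LOAD_OFFSET) {
	. = ALIGN(64);
	*(.data..cacheline_aligned)
}
```

The `*(.data..cacheline_aligned)` input-section pattern collects every object the compiler placed in that section, after the location counter is bumped to a cache-line boundary.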
+1 -1
include/linux/cache.h
··· 31 31 #ifndef __cacheline_aligned 32 32 #define __cacheline_aligned \ 33 33 __attribute__((__aligned__(SMP_CACHE_BYTES), \ 34 - __section__(".data.cacheline_aligned"))) 34 + __section__(".data..cacheline_aligned"))) 35 35 #endif /* __cacheline_aligned */ 36 36 37 37 #ifndef __cacheline_aligned_in_smp