KSYM_SYMBOL_LEN fixes

Miles Lane tailing /sys files hit a BUG which Pekka Enberg has tracked
to my 966c8c12dc9e77f931e2281ba25d2f0244b06949 "sprint_symbol(): use
less stack" exposing a bug in slub's list_locations() -
kallsyms_lookup() writes a 0 to namebuf[KSYM_NAME_LEN-1], but that was
beyond the end of the page provided.

The 100 bytes of slop which list_locations() allows at the end of the
page look roughly enough for all the other stuff it might print after
the symbol before it checks again: break out KSYM_SYMBOL_LEN earlier
than before.

Latencytop and ftrace are using KSYM_NAME_LEN buffers where they need
KSYM_SYMBOL_LEN buffers, and vmallocinfo a 2*KSYM_NAME_LEN buffer
where it wants a KSYM_SYMBOL_LEN buffer: fix those before anyone copies
them.

[akpm@linux-foundation.org: ftrace.h needs module.h]
Signed-off-by: Hugh Dickins <hugh@veritas.com>
Cc: Christoph Lameter <cl@linux-foundation.org>
Cc: Miles Lane <miles.lane@gmail.com>
Acked-by: Pekka Enberg <penberg@cs.helsinki.fi>
Acked-by: Steven Rostedt <srostedt@redhat.com>
Acked-by: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Rusty Russell <rusty@rustcorp.com.au>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

Authored by Hugh Dickins and committed by Linus Torvalds (9c246247 6ee5a399)

5 files changed, 6 insertions(+), 5 deletions(-)

--- a/fs/proc/base.c
+++ b/fs/proc/base.c
@@ -371,7 +371,7 @@
 			   task->latency_record[i].time,
 			   task->latency_record[i].max);
 		for (q = 0; q < LT_BACKTRACEDEPTH; q++) {
-			char sym[KSYM_NAME_LEN];
+			char sym[KSYM_SYMBOL_LEN];
 			char *c;
 			if (!task->latency_record[i].backtrace[q])
 				break;
--- a/include/linux/ftrace.h
+++ b/include/linux/ftrace.h
@@ -6,6 +6,7 @@
 #include <linux/ktime.h>
 #include <linux/init.h>
 #include <linux/types.h>
+#include <linux/module.h>
 #include <linux/kallsyms.h>
 
 #ifdef CONFIG_FUNCTION_TRACER
@@ -232,7 +231,7 @@
 
 struct boot_trace {
 	pid_t			caller;
-	char			func[KSYM_NAME_LEN];
+	char			func[KSYM_SYMBOL_LEN];
 	int			result;
 	unsigned long long	duration;		/* usecs */
 	ktime_t			calltime;
--- a/kernel/latencytop.c
+++ b/kernel/latencytop.c
@@ -191,7 +191,7 @@
 			   latency_record[i].time,
 			   latency_record[i].max);
 		for (q = 0; q < LT_BACKTRACEDEPTH; q++) {
-			char sym[KSYM_NAME_LEN];
+			char sym[KSYM_SYMBOL_LEN];
 			char *c;
 			if (!latency_record[i].backtrace[q])
 				break;
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3597,7 +3597,7 @@
 	for (i = 0; i < t.count; i++) {
 		struct location *l = &t.loc[i];
 
-		if (len > PAGE_SIZE - 100)
+		if (len > PAGE_SIZE - KSYM_SYMBOL_LEN - 100)
 			break;
 		len += sprintf(buf + len, "%7ld ", l->count);
 
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -1717,7 +1717,7 @@
 		v->addr, v->addr + v->size, v->size);
 
 	if (v->caller) {
-		char buff[2 * KSYM_NAME_LEN];
+		char buff[KSYM_SYMBOL_LEN];
 
 		seq_putc(m, ' ');
 		sprint_symbol(buff, (unsigned long)v->caller);