Merge tag 'trace-fixes-3.13-rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace

Pull tracing fix from Steven Rostedt:
"A regression showed up that there's a large delay when enabling all
events. This was prevalent when FTRACE_SELFTEST was enabled which
enables all events several times, and caused the system bootup to
pause for over a minute.

This was tracked down to the addition of a synchronize_sched() call
performed whenever a system call tracepoint is unregistered.

The synchronize_sched() is needed between unregistering the system
call tracepoints and deleting a tracing instance's buffers. But
placing a synchronize_sched() in the unreg of *every* system call
tracepoint is a bit overboard. A single synchronize_sched() before
the deletion of the instance is sufficient"

* tag 'trace-fixes-3.13-rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace:
tracing: Only run synchronize_sched() at instance deletion time
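
For context, each synchronize_sched() blocks the caller for a full
RCU-sched grace period, and the selftests enable and disable the
per-syscall enter/exit events several times over, so paying that wait
on every single unregister compounds into the minute-plus stall
described above. Below is a minimal kernel-style sketch of the pattern
the fix relies on; the demo_* names and types are illustrative
stand-ins, not the actual trace_syscalls.c symbols:

/*
 * Sketch of the unregister path, before and after the fix.  The event
 * pointer is published with RCU and read by handlers that run inside
 * rcu_read_lock_sched(), so unpublishing it does not itself need to
 * wait for readers.
 */
#include <linux/rcupdate.h>

struct demo_file;				/* stand-in for the per-event file */

struct demo_instance {
	struct demo_file __rcu	*enter_file;	/* read under rcu_read_lock_sched() */
	int			refcount_enter;
};

static void demo_unreg_event_syscall_enter(struct demo_instance *tr)
{
	tr->refcount_enter--;
	rcu_assign_pointer(tr->enter_file, NULL);
	/*
	 * Before the fix, a synchronize_sched() sat here, so every
	 * disabled syscall event paid a full grace-period wait.  After
	 * the fix nothing waits here; the single synchronize_sched()
	 * done at instance deletion time covers all of them at once.
	 */
}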

Changed files (+3, -10):

kernel/trace/trace_events.c (+3)
@@ -2314,6 +2314,9 @@
 	/* Disable any running events */
 	__ftrace_set_clr_event_nolock(tr, NULL, NULL, NULL, 0);
 
+	/* Access to events are within rcu_read_lock_sched() */
+	synchronize_sched();
+
 	down_write(&trace_event_sem);
 	__trace_remove_event_dirs(tr);
 	debugfs_remove_recursive(tr->event_dir);

kernel/trace/trace_syscalls.c (-10)
@@ -431,11 +431,6 @@
 	if (!tr->sys_refcount_enter)
 		unregister_trace_sys_enter(ftrace_syscall_enter, tr);
 	mutex_unlock(&syscall_trace_lock);
-	/*
-	 * Callers expect the event to be completely disabled on
-	 * return, so wait for current handlers to finish.
-	 */
-	synchronize_sched();
 }
 
 static int reg_event_syscall_exit(struct ftrace_event_file *file,
@@ -474,11 +469,6 @@
 	if (!tr->sys_refcount_exit)
 		unregister_trace_sys_exit(ftrace_syscall_exit, tr);
 	mutex_unlock(&syscall_trace_lock);
-	/*
-	 * Callers expect the event to be completely disabled on
-	 * return, so wait for current handlers to finish.
-	 */
-	synchronize_sched();
 }
 
 static int __init init_syscall_trace(struct ftrace_event_call *call)
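
The comment added in kernel/trace/trace_events.c points at why one
wait is enough: the syscall handlers look up the per-instance event
data inside an RCU-sched read-side section, so a single
synchronize_sched() after all the pointers have been cleared
guarantees no handler is still touching the instance when its event
directories and buffers are torn down. A minimal sketch of that
pairing, again using made-up demo_* names rather than the real kernel
symbols:

#include <linux/rcupdate.h>
#include <linux/slab.h>

struct demo_file {
	int hits;
};

struct demo_instance {
	struct demo_file __rcu	*enter_file;
};

/* Reader side: what a tracepoint handler does, under RCU-sched. */
static void demo_syscall_handler(struct demo_instance *tr)
{
	struct demo_file *file;

	rcu_read_lock_sched();
	file = rcu_dereference_sched(tr->enter_file);
	if (file)
		file->hits++;			/* record the event */
	rcu_read_unlock_sched();
}

/* Teardown: runs after the unregister path has NULLed the pointers. */
static void demo_delete_instance(struct demo_instance *tr,
				 struct demo_file *old_file)
{
	/*
	 * One grace period: once it returns, no CPU can still be inside
	 * rcu_read_lock_sched() holding a stale pointer, so freeing the
	 * per-event data (and the rest of the instance) is safe.
	 */
	synchronize_sched();
	kfree(old_file);
	kfree(tr);
}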
··· 431 431 if (!tr->sys_refcount_enter) 432 432 unregister_trace_sys_enter(ftrace_syscall_enter, tr); 433 433 mutex_unlock(&syscall_trace_lock); 434 - /* 435 - * Callers expect the event to be completely disabled on 436 - * return, so wait for current handlers to finish. 437 - */ 438 - synchronize_sched(); 439 434 } 440 435 441 436 static int reg_event_syscall_exit(struct ftrace_event_file *file, ··· 469 474 if (!tr->sys_refcount_exit) 470 475 unregister_trace_sys_exit(ftrace_syscall_exit, tr); 471 476 mutex_unlock(&syscall_trace_lock); 472 - /* 473 - * Callers expect the event to be completely disabled on 474 - * return, so wait for current handlers to finish. 475 - */ 476 - synchronize_sched(); 477 477 } 478 478 479 479 static int __init init_syscall_trace(struct ftrace_event_call *call)