Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

tracing: Always use canonical ftrace path

The canonical location for the tracefs filesystem is at /sys/kernel/tracing.

But, from Documentation/trace/ftrace.rst:

  Before 4.1, all ftrace tracing control files were within the debugfs
  file system, which is typically located at /sys/kernel/debug/tracing.
  For backward compatibility, when mounting the debugfs file system,
  the tracefs file system will be automatically mounted at:

    /sys/kernel/debug/tracing

Many comments and Kconfig help messages in the tracing code still refer
to this older debugfs path, so let's update them to avoid confusion.
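For scripts that must keep working on both pre- and post-4.1 kernels, the probe order this change implies can be sketched in shell. This is an illustrative sketch, not part of the patch; the `find_tracefs` helper name is invented here:

```shell
#!/bin/sh
# find_tracefs: print the first candidate directory that looks like a
# mounted tracefs instance (i.e. contains a "trace" file), if any.
find_tracefs() {
    for d in "$@"; do
        if [ -e "$d/trace" ]; then
            printf '%s\n' "$d"
            return 0
        fi
    done
    return 1
}

# Prefer the canonical path; fall back to the legacy debugfs automount.
find_tracefs /sys/kernel/tracing /sys/kernel/debug/tracing \
    || echo "tracefs not mounted" >&2
```

Tools such as perf do essentially this at runtime (see the tools/lib/api/fs/tracing_path.c hunk below, which only changes the compiled-in default).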

Link: https://lore.kernel.org/linux-trace-kernel/20230215223350.2658616-2-zwisler@google.com

Acked-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>
Reviewed-by: Mukesh Ojha <quic_mojha@quicinc.com>
Signed-off-by: Ross Zwisler <zwisler@google.com>
Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>

Authored by Ross Zwisler, committed by Steven Rostedt (Google)
2455f0e1 d8f0ae3e

+25 -25
+1 -1
include/linux/kernel.h
@@ -297,7 +297,7 @@
  *
  * Use tracing_on/tracing_off when you want to quickly turn on or off
  * tracing. It simply enables or disables the recording of the trace events.
- * This also corresponds to the user space /sys/kernel/debug/tracing/tracing_on
+ * This also corresponds to the user space /sys/kernel/tracing/tracing_on
  * file, which gives a means for the kernel and userspace to interact.
  * Place a tracing_off() in the kernel where you want tracing to end.
  * From user space, examine the trace, and then echo 1 > tracing_on
+2 -2
include/linux/tracepoint.h
@@ -471,7 +471,7 @@
  * * This is how the trace record is structured and will
  * * be saved into the ring buffer. These are the fields
  * * that will be exposed to user-space in
- * * /sys/kernel/debug/tracing/events/<*>/format.
+ * * /sys/kernel/tracing/events/<*>/format.
  * *
  * * The declared 'local variable' is called '__entry'
  * *
@@ -531,7 +531,7 @@
  * tracepoint callback (this is used by programmatic plugins and
  * can also by used by generic instrumentation like SystemTap), and
  * it is also used to expose a structured trace record in
- * /sys/kernel/debug/tracing/events/.
+ * /sys/kernel/tracing/events/.
  *
  * A set of (un)registration functions can be passed to the variant
  * TRACE_EVENT_FN to perform any (un)registration work.
+10 -10
kernel/trace/Kconfig
@@ -239,7 +239,7 @@
 	  enabled, and the functions not enabled will not affect
 	  performance of the system.
 
-	  See the files in /sys/kernel/debug/tracing:
+	  See the files in /sys/kernel/tracing:
 	    available_filter_functions
 	    set_ftrace_filter
 	    set_ftrace_notrace
@@ -299,7 +299,7 @@
 	select KALLSYMS
 	help
 	  This special tracer records the maximum stack footprint of the
-	  kernel and displays it in /sys/kernel/debug/tracing/stack_trace.
+	  kernel and displays it in /sys/kernel/tracing/stack_trace.
 
 	  This tracer works by hooking into every function call that the
 	  kernel executes, and keeping a maximum stack depth value and
@@ -339,7 +339,7 @@
 	  disabled by default and can be runtime (re-)started
 	  via:
 
-	      echo 0 > /sys/kernel/debug/tracing/tracing_max_latency
+	      echo 0 > /sys/kernel/tracing/tracing_max_latency
 
 	  (Note that kernel size and overhead increase with this option
 	  enabled. This option and the preempt-off timing option can be
@@ -363,7 +363,7 @@
 	  disabled by default and can be runtime (re-)started
 	  via:
 
-	      echo 0 > /sys/kernel/debug/tracing/tracing_max_latency
+	      echo 0 > /sys/kernel/tracing/tracing_max_latency
 
 	  (Note that kernel size and overhead increase with this option
 	  enabled. This option and the irqs-off timing option can be
@@ -515,7 +515,7 @@
 	  Allow tracing users to take snapshot of the current buffer using the
 	  ftrace interface, e.g.:
 
-	      echo 1 > /sys/kernel/debug/tracing/snapshot
+	      echo 1 > /sys/kernel/tracing/snapshot
 	      cat snapshot
 
 config TRACER_SNAPSHOT_PER_CPU_SWAP
@@ -527,7 +527,7 @@
 	  full swap (all buffers). If this is set, then the following is
 	  allowed:
 
-	      echo 1 > /sys/kernel/debug/tracing/per_cpu/cpu2/snapshot
+	      echo 1 > /sys/kernel/tracing/per_cpu/cpu2/snapshot
 
 	  After which, only the tracing buffer for CPU 2 was swapped with
 	  the main tracing buffer, and the other CPU buffers remain the same.
@@ -574,7 +574,7 @@
 	  This tracer profiles all likely and unlikely macros
 	  in the kernel. It will display the results in:
 
-	  /sys/kernel/debug/tracing/trace_stat/branch_annotated
+	  /sys/kernel/tracing/trace_stat/branch_annotated
 
 	  Note: this will add a significant overhead; only turn this
 	  on if you need to profile the system's use of these macros.
@@ -587,7 +587,7 @@
 	  taken in the kernel is recorded whether it hit or miss.
 	  The results will be displayed in:
 
-	  /sys/kernel/debug/tracing/trace_stat/branch_all
+	  /sys/kernel/tracing/trace_stat/branch_all
 
 	  This option also enables the likely/unlikely profiler.
 
@@ -638,8 +638,8 @@
 	  Tracing also is possible using the ftrace interface, e.g.:
 
 	  echo 1 > /sys/block/sda/sda1/trace/enable
-	  echo blk > /sys/kernel/debug/tracing/current_tracer
-	  cat /sys/kernel/debug/tracing/trace_pipe
+	  echo blk > /sys/kernel/tracing/current_tracer
+	  cat /sys/kernel/tracing/trace_pipe
 
 	  If unsure, say N.
 
+1 -1
kernel/trace/kprobe_event_gen_test.c
@@ -21,7 +21,7 @@
  * Then:
  *
  * # insmod kernel/trace/kprobe_event_gen_test.ko
- * # cat /sys/kernel/debug/tracing/trace
+ * # cat /sys/kernel/tracing/trace
  *
  * You should see many instances of the "gen_kprobe_test" and
  * "gen_kretprobe_test" events in the trace buffer.
+1 -1
kernel/trace/ring_buffer.c
@@ -2886,7 +2886,7 @@
 			sched_clock_stable() ? "" :
 			"If you just came from a suspend/resume,\n"
 			"please switch to the trace global clock:\n"
-			"  echo global > /sys/kernel/debug/tracing/trace_clock\n"
+			"  echo global > /sys/kernel/tracing/trace_clock\n"
 			"or add trace_clock=global to the kernel command line\n");
 	}
 
+1 -1
kernel/trace/synth_event_gen_test.c
@@ -22,7 +22,7 @@
  * Then:
  *
  * # insmod kernel/trace/synth_event_gen_test.ko
- * # cat /sys/kernel/debug/tracing/trace
+ * # cat /sys/kernel/tracing/trace
  *
  * You should see several events in the trace buffer -
  * "create_synth_test", "empty_synth_test", and several instances of
+1 -1
kernel/trace/trace.c
@@ -1187,7 +1187,7 @@
  *
  * Note, make sure to allocate the snapshot with either
  * a tracing_snapshot_alloc(), or by doing it manually
- * with: echo 1 > /sys/kernel/debug/tracing/snapshot
+ * with: echo 1 > /sys/kernel/tracing/snapshot
  *
  * If the snapshot buffer is not allocated, it will stop tracing.
  * Basically making a permanent snapshot.
+2 -2
samples/user_events/example.c
@@ -23,8 +23,8 @@
 #endif
 
 /* Assumes debugfs is mounted */
-const char *data_file = "/sys/kernel/debug/tracing/user_events_data";
-const char *status_file = "/sys/kernel/debug/tracing/user_events_status";
+const char *data_file = "/sys/kernel/tracing/user_events_data";
+const char *status_file = "/sys/kernel/tracing/user_events_status";
 
 static int event_status(long **status)
 {
+3 -3
scripts/tracing/draw_functrace.py
@@ -12,9 +12,9 @@
 
 Usage:
 	Be sure that you have CONFIG_FUNCTION_TRACER
-	# mount -t debugfs nodev /sys/kernel/debug
-	# echo function > /sys/kernel/debug/tracing/current_tracer
-	$ cat /sys/kernel/debug/tracing/trace_pipe > ~/raw_trace_func
+	# mount -t tracefs nodev /sys/kernel/tracing
+	# echo function > /sys/kernel/tracing/current_tracer
+	$ cat /sys/kernel/tracing/trace_pipe > ~/raw_trace_func
 	Wait some times but not too much, the script is a bit slow.
 	Break the pipe (Ctrl + Z)
 	$ scripts/tracing/draw_functrace.py < ~/raw_trace_func > draw_functrace
+2 -2
tools/lib/api/fs/tracing_path.c
@@ -14,8 +14,8 @@
 #include "tracing_path.h"
 
 static char tracing_mnt[PATH_MAX] = "/sys/kernel/debug";
-static char tracing_path[PATH_MAX] = "/sys/kernel/debug/tracing";
-static char tracing_events_path[PATH_MAX] = "/sys/kernel/debug/tracing/events";
+static char tracing_path[PATH_MAX] = "/sys/kernel/tracing";
+static char tracing_events_path[PATH_MAX] = "/sys/kernel/tracing/events";
 
 static void __tracing_path_set(const char *tracing, const char *mountpoint)
 {
+1 -1
tools/tracing/latency/latency-collector.c
@@ -1584,7 +1584,7 @@
 	/*
 	 * Toss a coin to decide if we want to sleep before printing
 	 * out the backtrace. The reason for this is that opening
-	 * /sys/kernel/debug/tracing/trace will cause a blackout of
+	 * /sys/kernel/tracing/trace will cause a blackout of
 	 * hundreds of ms, where no latencies will be noted by the
 	 * latency tracer. Thus by randomly sleeping we try to avoid
 	 * missing traces systematically due to this. With this option