Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'trace-v6.19-2' of git://git.kernel.org/pub/scm/linux/kernel/git/trace/linux-trace

Pull tracing fixes from Steven Rostedt:

- Fix accounting of stop_count in file release

On opening the trace file with the "pause-on-trace" option set, the
open increments stop_count. On file release, the code checks whether
stop_count is set and, if so, decrements it. Since this code was
originally written, other use cases have started incrementing
stop_count as well, so checking stop_count alone is no longer enough
to know whether it should be decremented.

Add a new iterator flag called "PAUSE", set it when the open disables
tracing, and decrement stop_count on close only when that flag is set.

- Remove length field in trace_seq_printf() of print_synth_event()

When printing a synthetic event that has a static-length array field,
the vsprintf() behind trace_seq_printf() produced "(efault)" in the
output. The print_fmt had replaced "%.*s" with "%s", leaving the
arguments off by one.

- Fix a bunch of typos

* tag 'trace-v6.19-2' of git://git.kernel.org/pub/scm/linux/kernel/git/trace/linux-trace:
tracing: Fix typo in trace_seq.c
tracing: Fix typo in trace_probe.c
tracing: Fix multiple typos in trace_osnoise.c
tracing: Fix multiple typos in trace_events_user.c
tracing: Fix typo in trace_events_trigger.c
tracing: Fix typo in trace_events_hist.c
tracing: Fix typo in trace_events_filter.c
tracing: Fix multiple typos in trace_events.c
tracing: Fix multiple typos in trace.c
tracing: Fix typo in ring_buffer_benchmark.c
tracing: Fix multiple typos in ring_buffer.c
tracing: Fix typo in fprobe.c
tracing: Fix typo in fgraph.c
tracing: Fix fixed array of synthetic event
tracing: Fix enabling of tracing on file release

+34 -32
include/linux/trace_events.h (+1)
···
 	TRACE_FILE_LAT_FMT = 1,
 	TRACE_FILE_ANNOTATE = 2,
 	TRACE_FILE_TIME_IN_NS = 4,
+	TRACE_FILE_PAUSE = 8,
 };
kernel/trace/fgraph.c (+1 -1)
···
 #define RET_STACK(t, offset) ((struct ftrace_ret_stack *)(&(t)->ret_stack[offset]))

 /*
- * Each fgraph_ops has a reservered unsigned long at the end (top) of the
+ * Each fgraph_ops has a reserved unsigned long at the end (top) of the
  * ret_stack to store task specific state.
  */
 #define SHADOW_STACK_TASK_VARS(ret_stack) \
kernel/trace/fprobe.c (+1 -1)
···
  * fprobe_table: hold 'fprobe_hlist::hlist' for checking the fprobe still
  * exists. The key is the address of fprobe instance.
  * fprobe_ip_table: hold 'fprobe_hlist::array[*]' for searching the fprobe
- * instance related to the funciton address. The key is the ftrace IP
+ * instance related to the function address. The key is the ftrace IP
  * address.
  *
  * When unregistering the fprobe, fprobe_hlist::fp and fprobe_hlist::array[*].fp
kernel/trace/ring_buffer.c (+3 -3)
···
 	bmeta->total_size = total_size;
 	bmeta->buffers_offset = (void *)ptr - (void *)bmeta;

-	/* Zero out the scatch pad */
+	/* Zero out the scratch pad */
 	memset((void *)bmeta + sizeof(*bmeta), 0, bmeta->buffers_offset - sizeof(*bmeta));

 	return false;
···
  * id field, and updated via this function.
  *
  * But for a fixed memory mapped buffer, the id is already assigned for
- * fixed memory ording in the memory layout and can not be used. Instead
+ * fixed memory ordering in the memory layout and can not be used. Instead
  * the index of where the page lies in the memory layout is used.
  *
  * For the normal pages, set the buffer page id with the passed in @id
···
 	/*
 	 * Show buffer is enabled before setting rb_test_started.
 	 * Yes there's a small race window where events could be
-	 * dropped and the thread wont catch it. But when a ring
+	 * dropped and the thread won't catch it. But when a ring
 	 * buffer gets enabled, there will always be some kind of
 	 * delay before other CPUs see it. Thus, we don't care about
 	 * those dropped events. We care about events dropped after
kernel/trace/ring_buffer_benchmark.c (+1 -1)
···
 {
 	int ret;

-	/* make a one meg buffer in overwite mode */
+	/* make a one meg buffer in overwrite mode */
 	buffer = ring_buffer_alloc(1000000, RB_FL_OVERWRITE);
 	if (!buffer)
 		return -ENOMEM;
kernel/trace/trace.c (+9 -7)
···
  * If there is an oops (or kernel panic) and the ftrace_dump_on_oops
  * is set, then ftrace_dump is called. This will output the contents
  * of the ftrace buffers to the console. This is very useful for
- * capturing traces that lead to crashes and outputing it to a
+ * capturing traces that lead to crashes and outputting it to a
  * serial console.
  *
  * It is default off, but you can enable it with either specifying
···
  * Set 1 if you want to dump buffers of all CPUs
  * Set 2 if you want to dump the buffer of the CPU that triggered oops
  * Set instance name if you want to dump the specific trace instance
- * Multiple instance dump is also supported, and instances are seperated
+ * Multiple instance dump is also supported, and instances are separated
  * by commas.
  */
 /* Set to string format zero to disable by default */
···
 	 * If pause-on-trace is enabled, then stop the trace while
 	 * dumping, unless this is the "snapshot" file
 	 */
-	if (!iter->snapshot && (tr->trace_flags & TRACE_ITER(PAUSE_ON_TRACE)))
+	if (!iter->snapshot && (tr->trace_flags & TRACE_ITER(PAUSE_ON_TRACE))) {
+		iter->iter_flags |= TRACE_FILE_PAUSE;
 		tracing_stop_tr(tr);
+	}

 	if (iter->cpu_file == RING_BUFFER_ALL_CPUS) {
 		for_each_tracing_cpu(cpu) {
···
 	if (iter->trace && iter->trace->close)
 		iter->trace->close(iter);

-	if (!iter->snapshot && tr->stop_count)
+	if (iter->iter_flags & TRACE_FILE_PAUSE)
 		/* reenable tracing if it was previously enabled */
 		tracing_start_tr(tr);
···
 		return -EINVAL;
 	/*
 	 * An instance must always have it set.
-	 * by default, that's the global_trace instane.
+	 * by default, that's the global_trace instance.
 	 */
 	if (printk_trace == tr)
 		update_printk_trace(&global_trace);
···
 	migrate_disable();

 	/*
-	 * Now preemption is being enabed and another task can come in
+	 * Now preemption is being enabled and another task can come in
 	 * and use the same buffer and corrupt our data.
 	 */
 	preempt_enable_notrace();
···
 	/*
 	 * When allocate_snapshot is set, the next call to
 	 * allocate_trace_buffers() (called by trace_array_get_by_name())
-	 * will allocate the snapshot buffer. That will alse clear
+	 * will allocate the snapshot buffer. That will also clear
 	 * this flag.
 	 */
 	allocate_snapshot = true;
kernel/trace/trace_events.c (+4 -4)
···
 		/* Anything else, this isn't a function */
 		break;
 	}
-	/* A function could be wrapped in parethesis, try the next one */
+	/* A function could be wrapped in parenthesis, try the next one */
 	s = r + 1;
 } while (s < e);
···
 	 * If start_arg is zero, then this is the start of the
 	 * first argument. The processing of the argument happens
 	 * when the end of the argument is found, as it needs to
-	 * handle paranthesis and such.
+	 * handle parenthesis and such.
 	 */
 	if (!start_arg) {
 		start_arg = i;
···
 *
 * When soft_disable is not set but the soft_mode is,
 * we do nothing. Do not disable the tracepoint, otherwise
- * "soft enable"s (clearing the SOFT_DISABLED bit) wont work.
+ * "soft enable"s (clearing the SOFT_DISABLED bit) won't work.
 */
 if (soft_disable) {
 	if (atomic_dec_return(&file->sm_ref) > 0)
···
 if (!tr)
 	return -ENOENT;

-/* Modules events can be appened with :mod:<module> */
+/* Modules events can be appended with :mod:<module> */
 mod = strstr(buf, ":mod:");
 if (mod) {
 	*mod = '\0';
kernel/trace/trace_events_filter.c (+1 -1)
···
 }

 /**
- * struct prog_entry - a singe entry in the filter program
+ * struct prog_entry - a single entry in the filter program
  * @target: Index to jump to on a branch (actually one minus the index)
  * @when_to_branch: The value of the result of the predicate to do a branch
  * @pred: The predicate to execute.
kernel/trace/trace_events_hist.c (+1 -1)
···
  * on the stack, so when the histogram trigger is initialized
  * a percpu array of 4 hist_pad structures is allocated.
  * This will cover every context from normal, softirq, irq and NMI
- * in the very unlikely event that a tigger happens at each of
+ * in the very unlikely event that a trigger happens at each of
  * these contexts and interrupts a currently active trigger.
  */
 struct hist_pad {
kernel/trace/trace_events_synth.c (-1)
···
 		n_u64++;
 	} else {
 		trace_seq_printf(s, print_fmt, se->fields[i]->name,
-				 STR_VAR_LEN_MAX,
 				 (char *)&entry->fields[n_u64].as_u64,
 				 i == se->n_fields - 1 ? "" : " ");
 		n_u64 += STR_VAR_LEN_MAX / sizeof(u64);
kernel/trace/trace_events_trigger.c (+1 -1)
···
  * param - text following cmd and ':' and stripped of filter
  * filter - the optional filter text following (and including) 'if'
  *
- * To illustrate the use of these componenents, here are some concrete
+ * To illustrate the use of these components, here are some concrete
  * examples. For the following triggers:
  *
  * echo 'traceon:5 if pid == 0' > trigger
kernel/trace/trace_events_user.c (+3 -3)
···
 static int user_field_size(const char *type)
 {
-	/* long is not allowed from a user, since it's ambigious in size */
+	/* long is not allowed from a user, since it's ambiguous in size */
 	if (strcmp(type, "s64") == 0)
 		return sizeof(s64);
 	if (strcmp(type, "u64") == 0)
···
 	if (str_has_prefix(type, "__rel_loc "))
 		return sizeof(u32);

-	/* Uknown basic type, error */
+	/* Unknown basic type, error */
 	return -EINVAL;
 }
···
 	/*
 	 * Prevent users from using the same address and bit multiple times
 	 * within the same mm address space. This can cause unexpected behavior
-	 * for user processes that is far easier to debug if this is explictly
+	 * for user processes that is far easier to debug if this is explicitly
	 * an error upon registering.
 	 */
 	if (current_user_event_enabler_exists((unsigned long)reg.enable_addr,
kernel/trace/trace_osnoise.c (+6 -6)
···
 	u64 print_stack;	/* print IRQ stack if total > */
 	int timerlat_tracer;	/* timerlat tracer */
 #endif
-	bool tainted;		/* infor users and developers about a problem */
+	bool tainted;		/* info users and developers about a problem */
 } osnoise_data = {
 	.sample_period = DEFAULT_SAMPLE_PERIOD,
 	.sample_runtime = DEFAULT_SAMPLE_RUNTIME,
···
 /*
  * get_int_safe_duration - Get the duration of a window
  *
- * The irq, softirq and thread varaibles need to have its duration without
+ * The irq, softirq and thread variables need to have its duration without
  * the interference from higher priority interrupts. Instead of keeping a
  * variable to discount the interrupt interference from these variables, the
  * starting time of these variables are pushed forward with the interrupt's
···
 	stop_in = osnoise_data.stop_tracing * NSEC_PER_USEC;

 	/*
-	 * Start timestemp
+	 * Start timestamp
 	 */
 	start = time_get();
···
 	tlat->kthread = current;
 	osn_var->pid = current->pid;
 	/*
-	 * Anotate the arrival time.
+	 * Annotate the arrival time.
 	 */
 	tlat->abs_period = hrtimer_cb_get_time(&tlat->timer);
···
 }

 /*
- * start_kthread - Start a workload tread
+ * start_kthread - Start a workload thread
 */
 static int start_kthread(unsigned int cpu)
 {
···
 	 * Why not using tracing instance per_cpu/ dir?
 	 *
 	 * Because osnoise/timerlat have a single workload, having
-	 * multiple files like these are wast of memory.
+	 * multiple files like these are waste of memory.
 	 */
 	per_cpu = tracefs_create_dir("per_cpu", top_dir);
 	if (!per_cpu)
kernel/trace/trace_probe.c (+1 -1)
···
 	}
 }

-/* Return 1 if the field separater is arrow operator ('->') */
+/* Return 1 if the field separator is arrow operator ('->') */
 static int split_next_field(char *varname, char **next_field,
 			    struct traceprobe_parse_context *ctx)
 {
kernel/trace/trace_seq.c (+1 -1)
···
 *
 * A write to the buffer will either succeed or fail. That is, unlike
 * sprintf() there will not be a partial write (well it may write into
- * the buffer but it wont update the pointers). This allows users to
+ * the buffer but it won't update the pointers). This allows users to
 * try to write something into the trace_seq buffer and if it fails
 * they can flush it and try again.
 *