Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'trace-v6.5-rc6' of git://git.kernel.org/pub/scm/linux/kernel/git/trace/linux-trace

Pull tracing fixes from Steven Rostedt:

- Fix ring buffer being permanently disabled due to missed
record_disabled()

Changing the trace cpu mask will disable the ring buffers for the
CPUs no longer in the mask. But it fails to update the snapshot
buffer. If a snapshot takes place, the accounting for the ring buffer
being disabled is corrupted and this can lead to the ring buffer
being permanently disabled.

- Add test case for snapshot and cpu mask working together

- Fix a memleak caused by the function graph tracer not getting closed
  properly.

The iterator is used to read the ring buffer. When it is opened, it
calls the open function of the current tracer, and when it is closed,
it calls the tracer's close function. While a trace is being read, it
is still possible to change the tracer.

If this happens between the function graph tracer and the wakeup
tracer (which uses function graph tracing), the tracers are not
closed properly when the iterator sees the switch. In addition, the
wakeup tracer did not initialize its private pointer to NULL, which
is used to tell whether the function graph tracer was the last
tracer; it could be fooled into thinking it was, but then on exit it
does not call the close function of the function graph tracer to
clean up its data.

- Fix synthetic events on big endian machines, by introducing a union
that does the conversions properly.

- Fix synthetic events so they do not print the number of elements in
  the stacktrace when they shouldn't.

- Fix synthetic events stacktrace to not print a bogus value at the
end.

- Introduce a pipe_cpumask that prevents the trace_pipe files from
being opened by more than one task (file descriptor).

There was a race found where, if splice is called, the iter->ent
could become stale and events could be missed. There's no point in
more than one task reading a producer/consumer file anyway, as they
will corrupt each other. Add a cpumask that keeps track of the
per_cpu trace_pipe files as well as the global trace_pipe file, and
prevents more than one open of a trace_pipe file that represents the
same ring buffer. This prevents the race from happening.

- Fix ftrace samples for arm64 to work with older compilers.

* tag 'trace-v6.5-rc6' of git://git.kernel.org/pub/scm/linux/kernel/git/trace/linux-trace:
samples: ftrace: Replace bti assembly with hint for older compiler
tracing: Introduce pipe_cpumask to avoid race on trace_pipes
tracing: Fix memleak due to race between current_tracer and trace
tracing/synthetic: Allocate one additional element for size
tracing/synthetic: Skip first entry for stack traces
tracing/synthetic: Use union instead of casts
selftests/ftrace: Add a basic testcase for snapshot
tracing: Fix cpu buffers unavailable due to 'record_disabled' missed

+166 -78
+11
include/linux/trace_events.h
```diff
 extern __printf(2, 3)
 void trace_event_printf(struct trace_iterator *iter, const char *fmt, ...);
 
+/* Used to find the offset and length of dynamic fields in trace events */
+struct trace_dynamic_info {
+#ifdef CONFIG_CPU_BIG_ENDIAN
+	u16	offset;
+	u16	len;
+#else
+	u16	len;
+	u16	offset;
+#endif
+};
+
 /*
  * The trace entry - the most basic unit of tracing. This is what
  * is printed in the end as a single line in the trace output, such as:
```
+62 -8
kernel/trace/trace.c
```diff
 	 * will point to the same string as current_trace->name.
 	 */
 	mutex_lock(&trace_types_lock);
-	if (unlikely(tr->current_trace && iter->trace->name != tr->current_trace->name))
+	if (unlikely(tr->current_trace && iter->trace->name != tr->current_trace->name)) {
+		/* Close iter->trace before switching to the new current tracer */
+		if (iter->trace->close)
+			iter->trace->close(iter);
 		*iter->trace = *tr->current_trace;
+		/* Reopen the new current tracer */
+		if (iter->trace->open)
+			iter->trace->open(iter);
+	}
 	mutex_unlock(&trace_types_lock);
 
 #ifdef CONFIG_TRACER_MAX_TRACE
···
 		    !cpumask_test_cpu(cpu, tracing_cpumask_new)) {
 			atomic_inc(&per_cpu_ptr(tr->array_buffer.data, cpu)->disabled);
 			ring_buffer_record_disable_cpu(tr->array_buffer.buffer, cpu);
+#ifdef CONFIG_TRACER_MAX_TRACE
+			ring_buffer_record_disable_cpu(tr->max_buffer.buffer, cpu);
+#endif
 		}
 		if (!cpumask_test_cpu(cpu, tr->tracing_cpumask) &&
 		    cpumask_test_cpu(cpu, tracing_cpumask_new)) {
 			atomic_dec(&per_cpu_ptr(tr->array_buffer.data, cpu)->disabled);
 			ring_buffer_record_enable_cpu(tr->array_buffer.buffer, cpu);
+#ifdef CONFIG_TRACER_MAX_TRACE
+			ring_buffer_record_enable_cpu(tr->max_buffer.buffer, cpu);
+#endif
 		}
 	}
 	arch_spin_unlock(&tr->max_lock);
···
 
 #endif
 
+static int open_pipe_on_cpu(struct trace_array *tr, int cpu)
+{
+	if (cpu == RING_BUFFER_ALL_CPUS) {
+		if (cpumask_empty(tr->pipe_cpumask)) {
+			cpumask_setall(tr->pipe_cpumask);
+			return 0;
+		}
+	} else if (!cpumask_test_cpu(cpu, tr->pipe_cpumask)) {
+		cpumask_set_cpu(cpu, tr->pipe_cpumask);
+		return 0;
+	}
+	return -EBUSY;
+}
+
+static void close_pipe_on_cpu(struct trace_array *tr, int cpu)
+{
+	if (cpu == RING_BUFFER_ALL_CPUS) {
+		WARN_ON(!cpumask_full(tr->pipe_cpumask));
+		cpumask_clear(tr->pipe_cpumask);
+	} else {
+		WARN_ON(!cpumask_test_cpu(cpu, tr->pipe_cpumask));
+		cpumask_clear_cpu(cpu, tr->pipe_cpumask);
+	}
+}
+
 static int tracing_open_pipe(struct inode *inode, struct file *filp)
 {
 	struct trace_array *tr = inode->i_private;
 	struct trace_iterator *iter;
+	int cpu;
 	int ret;
 
 	ret = tracing_check_open_get_tr(tr);
···
 		return ret;
 
 	mutex_lock(&trace_types_lock);
+	cpu = tracing_get_cpu(inode);
+	ret = open_pipe_on_cpu(tr, cpu);
+	if (ret)
+		goto fail_pipe_on_cpu;
 
 	/* create a buffer to store the information to pass to userspace */
 	iter = kzalloc(sizeof(*iter), GFP_KERNEL);
 	if (!iter) {
 		ret = -ENOMEM;
-		__trace_array_put(tr);
-		goto out;
+		goto fail_alloc_iter;
 	}
 
 	trace_seq_init(&iter->seq);
···
 
 	iter->tr = tr;
 	iter->array_buffer = &tr->array_buffer;
-	iter->cpu_file = tracing_get_cpu(inode);
+	iter->cpu_file = cpu;
 	mutex_init(&iter->mutex);
 	filp->private_data = iter;
···
 	nonseekable_open(inode, filp);
 
 	tr->trace_ref++;
-out:
+
 	mutex_unlock(&trace_types_lock);
 	return ret;
 
 fail:
 	kfree(iter);
+fail_alloc_iter:
+	close_pipe_on_cpu(tr, cpu);
+fail_pipe_on_cpu:
 	__trace_array_put(tr);
 	mutex_unlock(&trace_types_lock);
 	return ret;
···
 
 	if (iter->trace->pipe_close)
 		iter->trace->pipe_close(iter);
-
+	close_pipe_on_cpu(tr, iter->cpu_file);
 	mutex_unlock(&trace_types_lock);
 
 	free_cpumask_var(iter->started);
···
 	if (!alloc_cpumask_var(&tr->tracing_cpumask, GFP_KERNEL))
 		goto out_free_tr;
 
+	if (!alloc_cpumask_var(&tr->pipe_cpumask, GFP_KERNEL))
+		goto out_free_tr;
+
 	tr->trace_flags = global_trace.trace_flags & ~ZEROED_TRACE_FLAGS;
 
 	cpumask_copy(tr->tracing_cpumask, cpu_all_mask);
···
 out_free_tr:
 	ftrace_free_ftrace_ops(tr);
 	free_trace_buffers(tr);
+	free_cpumask_var(tr->pipe_cpumask);
 	free_cpumask_var(tr->tracing_cpumask);
 	kfree(tr->name);
 	kfree(tr);
···
 	}
 	kfree(tr->topts);
 
+	free_cpumask_var(tr->pipe_cpumask);
 	free_cpumask_var(tr->tracing_cpumask);
 	kfree(tr->name);
 	kfree(tr);
···
 	if (trace_create_savedcmd() < 0)
 		goto out_free_temp_buffer;
 
+	if (!alloc_cpumask_var(&global_trace.pipe_cpumask, GFP_KERNEL))
+		goto out_free_savedcmd;
+
 	/* TODO: make the number of buffers hot pluggable with CPUS */
 	if (allocate_trace_buffers(&global_trace, ring_buf_size) < 0) {
 		MEM_FAIL(1, "tracer: failed to allocate ring buffer!\n");
-		goto out_free_savedcmd;
+		goto out_free_pipe_cpumask;
 	}
-
 	if (global_trace.buffer_disabled)
 		tracing_off();
···
 
 	return 0;
 
+out_free_pipe_cpumask:
+	free_cpumask_var(global_trace.pipe_cpumask);
 out_free_savedcmd:
 	free_saved_cmdlines_buffer(savedcmd);
 out_free_temp_buffer:
```
+10
kernel/trace/trace.h
```diff
 	struct list_head	events;
 	struct trace_event_file *trace_marker_file;
 	cpumask_var_t		tracing_cpumask; /* only trace on set CPUs */
+	/* one per_cpu trace_pipe can be opened by only one user */
+	cpumask_var_t		pipe_cpumask;
 	int			ref;
 	int			trace_ref;
 #ifdef CONFIG_FUNCTION_TRACER
···
 
 /* set ring buffers to default size if not already done so */
 int tracing_update_buffers(void);
+
+union trace_synth_field {
+	u8				as_u8;
+	u16				as_u16;
+	u32				as_u32;
+	u64				as_u64;
+	struct trace_dynamic_info	as_dynamic;
+};
 
 struct ftrace_event_field {
 	struct list_head	link;
```
+41 -62
kernel/trace/trace_events_synth.c
```diff
 
 struct synth_trace_event {
 	struct trace_entry	ent;
-	u64			fields[];
+	union trace_synth_field	fields[];
 };
 
 static int synth_event_define_fields(struct trace_event_call *call)
···
 
 static void print_synth_event_num_val(struct trace_seq *s,
 				      char *print_fmt, char *name,
-				      int size, u64 val, char *space)
+				      int size, union trace_synth_field *val, char *space)
 {
 	switch (size) {
 	case 1:
-		trace_seq_printf(s, print_fmt, name, (u8)val, space);
+		trace_seq_printf(s, print_fmt, name, val->as_u8, space);
 		break;
 
 	case 2:
-		trace_seq_printf(s, print_fmt, name, (u16)val, space);
+		trace_seq_printf(s, print_fmt, name, val->as_u16, space);
 		break;
 
 	case 4:
-		trace_seq_printf(s, print_fmt, name, (u32)val, space);
+		trace_seq_printf(s, print_fmt, name, val->as_u32, space);
 		break;
 
 	default:
···
 	struct trace_seq *s = &iter->seq;
 	struct synth_trace_event *entry;
 	struct synth_event *se;
-	unsigned int i, n_u64;
+	unsigned int i, j, n_u64;
 	char print_fmt[32];
 	const char *fmt;
···
 		/* parameter values */
 		if (se->fields[i]->is_string) {
 			if (se->fields[i]->is_dynamic) {
-				u32 offset, data_offset;
-				char *str_field;
-
-				offset = (u32)entry->fields[n_u64];
-				data_offset = offset & 0xffff;
-
-				str_field = (char *)entry + data_offset;
+				union trace_synth_field *data = &entry->fields[n_u64];
 
 				trace_seq_printf(s, print_fmt, se->fields[i]->name,
 						 STR_VAR_LEN_MAX,
-						 str_field,
+						 (char *)entry + data->as_dynamic.offset,
 						 i == se->n_fields - 1 ? "" : " ");
 				n_u64++;
 			} else {
 				trace_seq_printf(s, print_fmt, se->fields[i]->name,
 						 STR_VAR_LEN_MAX,
-						 (char *)&entry->fields[n_u64],
+						 (char *)&entry->fields[n_u64].as_u64,
 						 i == se->n_fields - 1 ? "" : " ");
 				n_u64 += STR_VAR_LEN_MAX / sizeof(u64);
 			}
 		} else if (se->fields[i]->is_stack) {
-			u32 offset, data_offset, len;
-			unsigned long *p, *end;
-
-			offset = (u32)entry->fields[n_u64];
-			data_offset = offset & 0xffff;
-			len = offset >> 16;
-
-			p = (void *)entry + data_offset;
-			end = (void *)p + len - (sizeof(long) - 1);
+			union trace_synth_field *data = &entry->fields[n_u64];
+			unsigned long *p = (void *)entry + data->as_dynamic.offset;
 
 			trace_seq_printf(s, "%s=STACK:\n", se->fields[i]->name);
-
-			for (; *p && p < end; p++)
-				trace_seq_printf(s, "=> %pS\n", (void *)*p);
+			for (j = 1; j < data->as_dynamic.len / sizeof(long); j++)
+				trace_seq_printf(s, "=> %pS\n", (void *)p[j]);
 			n_u64++;
-
 		} else {
 			struct trace_print_flags __flags[] = {
 				__def_gfpflag_names, {-1, NULL} };
···
 			print_synth_event_num_val(s, print_fmt,
 						  se->fields[i]->name,
 						  se->fields[i]->size,
-						  entry->fields[n_u64],
+						  &entry->fields[n_u64],
 						  space);
 
 			if (strcmp(se->fields[i]->type, "gfp_t") == 0) {
 				trace_seq_puts(s, " (");
 				trace_print_flags_seq(s, "|",
-						      entry->fields[n_u64],
+						      entry->fields[n_u64].as_u64,
 						      __flags);
 				trace_seq_putc(s, ')');
 			}
···
 	int ret;
 
 	if (is_dynamic) {
-		u32 data_offset;
+		union trace_synth_field *data = &entry->fields[*n_u64];
 
-		data_offset = struct_size(entry, fields, event->n_u64);
-		data_offset += data_size;
-
-		len = fetch_store_strlen((unsigned long)str_val);
-
-		data_offset |= len << 16;
-		*(u32 *)&entry->fields[*n_u64] = data_offset;
+		data->as_dynamic.offset = struct_size(entry, fields, event->n_u64) + data_size;
+		data->as_dynamic.len = fetch_store_strlen((unsigned long)str_val);
 
 		ret = fetch_store_string((unsigned long)str_val, &entry->fields[*n_u64], entry);
 
 		(*n_u64)++;
 	} else {
-		str_field = (char *)&entry->fields[*n_u64];
+		str_field = (char *)&entry->fields[*n_u64].as_u64;
 
 #ifdef CONFIG_ARCH_HAS_NON_OVERLAPPING_ADDRESS_SPACE
 		if ((unsigned long)str_val < TASK_SIZE)
···
 			  unsigned int data_size,
 			  unsigned int *n_u64)
 {
+	union trace_synth_field *data = &entry->fields[*n_u64];
 	unsigned int len;
 	u32 data_offset;
 	void *data_loc;
···
 		break;
 	}
 
-	/* Include the zero'd element if it fits */
-	if (len < HIST_STACKTRACE_DEPTH)
-		len++;
-
 	len *= sizeof(long);
 
 	/* Find the dynamic section to copy the stack into. */
···
 	memcpy(data_loc, stack, len);
 
 	/* Fill in the field that holds the offset/len combo */
-	data_offset |= len << 16;
-	*(u32 *)&entry->fields[*n_u64] = data_offset;
+
+	data->as_dynamic.offset = data_offset;
+	data->as_dynamic.len = len;
 
 	(*n_u64)++;
···
 			str_val = (char *)(long)var_ref_vals[val_idx];
 
 			if (event->dynamic_fields[i]->is_stack) {
-				len = *((unsigned long *)str_val);
+				/* reserve one extra element for size */
+				len = *((unsigned long *)str_val) + 1;
 				len *= sizeof(unsigned long);
 			} else {
 				len = fetch_store_strlen((unsigned long)str_val);
···
 
 		switch (field->size) {
 		case 1:
-			*(u8 *)&entry->fields[n_u64] = (u8)val;
+			entry->fields[n_u64].as_u8 = (u8)val;
 			break;
 
 		case 2:
-			*(u16 *)&entry->fields[n_u64] = (u16)val;
+			entry->fields[n_u64].as_u16 = (u16)val;
 			break;
 
 		case 4:
-			*(u32 *)&entry->fields[n_u64] = (u32)val;
+			entry->fields[n_u64].as_u32 = (u32)val;
 			break;
 
 		default:
-			entry->fields[n_u64] = val;
+			entry->fields[n_u64].as_u64 = val;
 			break;
 		}
 		n_u64++;
···
 
 		switch (field->size) {
 		case 1:
-			*(u8 *)&state.entry->fields[n_u64] = (u8)val;
+			state.entry->fields[n_u64].as_u8 = (u8)val;
 			break;
 
 		case 2:
-			*(u16 *)&state.entry->fields[n_u64] = (u16)val;
+			state.entry->fields[n_u64].as_u16 = (u16)val;
 			break;
 
 		case 4:
-			*(u32 *)&state.entry->fields[n_u64] = (u32)val;
+			state.entry->fields[n_u64].as_u32 = (u32)val;
 			break;
 
 		default:
-			state.entry->fields[n_u64] = val;
+			state.entry->fields[n_u64].as_u64 = val;
 			break;
 		}
 		n_u64++;
···
 
 		switch (field->size) {
 		case 1:
-			*(u8 *)&state.entry->fields[n_u64] = (u8)val;
+			state.entry->fields[n_u64].as_u8 = (u8)val;
 			break;
 
 		case 2:
-			*(u16 *)&state.entry->fields[n_u64] = (u16)val;
+			state.entry->fields[n_u64].as_u16 = (u16)val;
 			break;
 
 		case 4:
-			*(u32 *)&state.entry->fields[n_u64] = (u32)val;
+			state.entry->fields[n_u64].as_u32 = (u32)val;
 			break;
 
 		default:
-			state.entry->fields[n_u64] = val;
+			state.entry->fields[n_u64].as_u64 = val;
 			break;
 		}
 		n_u64++;
···
 	} else {
 		switch (field->size) {
 		case 1:
-			*(u8 *)&trace_state->entry->fields[field->offset] = (u8)val;
+			trace_state->entry->fields[field->offset].as_u8 = (u8)val;
 			break;
 
 		case 2:
-			*(u16 *)&trace_state->entry->fields[field->offset] = (u16)val;
+			trace_state->entry->fields[field->offset].as_u16 = (u16)val;
 			break;
 
 		case 4:
-			*(u32 *)&trace_state->entry->fields[field->offset] = (u32)val;
+			trace_state->entry->fields[field->offset].as_u32 = (u32)val;
 			break;
 
 		default:
-			trace_state->entry->fields[field->offset] = val;
+			trace_state->entry->fields[field->offset].as_u64 = val;
 			break;
 		}
 	}
```
+2 -1
kernel/trace/trace_irqsoff.c
```diff
 {
 	if (is_graph(iter->tr))
 		graph_trace_open(iter);
-
+	else
+		iter->private = NULL;
 }
 
 static void irqsoff_trace_close(struct trace_iterator *iter)
```
+2
kernel/trace/trace_sched_wakeup.c
```diff
 {
 	if (is_graph(iter->tr))
 		graph_trace_open(iter);
+	else
+		iter->private = NULL;
 }
 
 static void wakeup_trace_close(struct trace_iterator *iter)
```
+2 -2
samples/ftrace/ftrace-direct-modify.c
··· 105 105 " .type my_tramp1, @function\n" 106 106 " .globl my_tramp1\n" 107 107 " my_tramp1:" 108 - " bti c\n" 108 + " hint 34\n" // bti c 109 109 " sub sp, sp, #16\n" 110 110 " stp x9, x30, [sp]\n" 111 111 " bl my_direct_func1\n" ··· 117 117 " .type my_tramp2, @function\n" 118 118 " .globl my_tramp2\n" 119 119 " my_tramp2:" 120 - " bti c\n" 120 + " hint 34\n" // bti c 121 121 " sub sp, sp, #16\n" 122 122 " stp x9, x30, [sp]\n" 123 123 " bl my_direct_func2\n"
+2 -2
samples/ftrace/ftrace-direct-multi-modify.c
··· 112 112 " .type my_tramp1, @function\n" 113 113 " .globl my_tramp1\n" 114 114 " my_tramp1:" 115 - " bti c\n" 115 + " hint 34\n" // bti c 116 116 " sub sp, sp, #32\n" 117 117 " stp x9, x30, [sp]\n" 118 118 " str x0, [sp, #16]\n" ··· 127 127 " .type my_tramp2, @function\n" 128 128 " .globl my_tramp2\n" 129 129 " my_tramp2:" 130 - " bti c\n" 130 + " hint 34\n" // bti c 131 131 " sub sp, sp, #32\n" 132 132 " stp x9, x30, [sp]\n" 133 133 " str x0, [sp, #16]\n"
+1 -1
samples/ftrace/ftrace-direct-multi.c
··· 75 75 " .type my_tramp, @function\n" 76 76 " .globl my_tramp\n" 77 77 " my_tramp:" 78 - " bti c\n" 78 + " hint 34\n" // bti c 79 79 " sub sp, sp, #32\n" 80 80 " stp x9, x30, [sp]\n" 81 81 " str x0, [sp, #16]\n"
+1 -1
samples/ftrace/ftrace-direct-too.c
··· 81 81 " .type my_tramp, @function\n" 82 82 " .globl my_tramp\n" 83 83 " my_tramp:" 84 - " bti c\n" 84 + " hint 34\n" // bti c 85 85 " sub sp, sp, #48\n" 86 86 " stp x9, x30, [sp]\n" 87 87 " stp x0, x1, [sp, #16]\n"
+1 -1
samples/ftrace/ftrace-direct.c
··· 72 72 " .type my_tramp, @function\n" 73 73 " .globl my_tramp\n" 74 74 " my_tramp:" 75 - " bti c\n" 75 + " hint 34\n" // bti c 76 76 " sub sp, sp, #32\n" 77 77 " stp x9, x30, [sp]\n" 78 78 " str x0, [sp, #16]\n"
+31
tools/testing/selftests/ftrace/test.d/00basic/snapshot1.tc
```sh
#!/bin/sh
# SPDX-License-Identifier: GPL-2.0
# description: Snapshot and tracing_cpumask
# requires: trace_marker tracing_cpumask snapshot
# flags: instance

# This testcase is contrived to reproduce a problem where the cpu buffers
# become unavailable, due to 'record_disabled' of array_buffer and
# max_buffer getting out of sync.

# Store the original cpumask
ORIG_CPUMASK=`cat tracing_cpumask`

# Stop tracing on all cpus
echo 0 > tracing_cpumask

# Take a snapshot of the main buffer
echo 1 > snapshot

# Restore the original cpumask; some cpus should now be traced again
echo ${ORIG_CPUMASK} > tracing_cpumask

# Turn tracing on
echo 1 > tracing_on

# Write a log into the buffer
echo "test input 1" > trace_marker

# Ensure the log was written, i.e. the cpu buffers are still available
grep -q "test input 1" trace
exit 0
```