Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'trace-fixes-v3.16-rc5-v2' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace

Pull tracing fixes from Steven Rostedt:
"A few more fixes for ftrace infrastructure.

I was cleaning out my INBOX and found two fixes from zhangwei from a
year ago that were lost in my mail. These fix an inconsistency
between trace_puts() and the way trace_printk() works. The reason
this is important to fix is because when trace_printk() doesn't have
any arguments, it turns into a trace_puts(). Not being able to enable
a stack trace against trace_printk() because it does not have any
arguments is quite confusing. Also, the fix is rather trivial and low
risk.

While porting some changes to PowerPC I discovered that it still has
the function graph tracer filter bug that if you also enable stack
tracing the function graph tracer filter is ignored. I fixed that up.

Finally, Martin Lau fixed a bug that would cause readers of the
ftrace ring buffer to block forever even though they were supposed to
be NONBLOCK"

This also includes the fix from an earlier pull request:

"Oleg Nesterov fixed a memory leak that happens if a user creates a
tracing instance, sets up a filter in an event, and then removes that
instance. The filter allocates memory that is never freed when the
instance is destroyed"

* tag 'trace-fixes-v3.16-rc5-v2' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace:
ring-buffer: Fix polling on trace_pipe
tracing: Add TRACE_ITER_PRINTK flag check in __trace_puts/__trace_bputs
tracing: Fix graph tracer with stack tracer on other archs
tracing: Add ftrace_trace_stack into __trace_puts/__trace_bputs
tracing: instance_rmdir() leaks ftrace_event_file->filter

4 files changed, 19 insertions(+), 8 deletions(-)
kernel/trace/ftrace.c  +2 -2
···
 		func = ftrace_ops_list_func;
 	}
 
+	update_function_graph_func();
+
 	/* If there's no change, then do nothing more here */
 	if (ftrace_trace_function == func)
 		return;
-
-	update_function_graph_func();
 
 	/*
 	 * If we are using the list function, it doesn't care
kernel/trace/ring_buffer.c  -4
···
 	struct ring_buffer_per_cpu *cpu_buffer;
 	struct rb_irq_work *work;
 
-	if ((cpu == RING_BUFFER_ALL_CPUS && !ring_buffer_empty(buffer)) ||
-	    (cpu != RING_BUFFER_ALL_CPUS && !ring_buffer_empty_cpu(buffer, cpu)))
-		return POLLIN | POLLRDNORM;
-
 	if (cpu == RING_BUFFER_ALL_CPUS)
 		work = &buffer->irq_work;
 	else {
kernel/trace/trace.c  +16 -2
···
 	struct print_entry *entry;
 	unsigned long irq_flags;
 	int alloc;
+	int pc;
+
+	if (!(trace_flags & TRACE_ITER_PRINTK))
+		return 0;
+
+	pc = preempt_count();
 
 	if (unlikely(tracing_selftest_running || tracing_disabled))
 		return 0;
···
 	local_save_flags(irq_flags);
 	buffer = global_trace.trace_buffer.buffer;
 	event = trace_buffer_lock_reserve(buffer, TRACE_PRINT, alloc,
-					  irq_flags, preempt_count());
+					  irq_flags, pc);
 	if (!event)
 		return 0;
···
 		entry->buf[size] = '\0';
 
 	__buffer_unlock_commit(buffer, event);
+	ftrace_trace_stack(buffer, irq_flags, 4, pc);
 
 	return size;
 }
···
 	struct bputs_entry *entry;
 	unsigned long irq_flags;
 	int size = sizeof(struct bputs_entry);
+	int pc;
+
+	if (!(trace_flags & TRACE_ITER_PRINTK))
+		return 0;
+
+	pc = preempt_count();
 
 	if (unlikely(tracing_selftest_running || tracing_disabled))
 		return 0;
···
 	local_save_flags(irq_flags);
 	buffer = global_trace.trace_buffer.buffer;
 	event = trace_buffer_lock_reserve(buffer, TRACE_BPUTS, size,
-					  irq_flags, preempt_count());
+					  irq_flags, pc);
 	if (!event)
 		return 0;
···
 	entry->str = str;
 
 	__buffer_unlock_commit(buffer, event);
+	ftrace_trace_stack(buffer, irq_flags, 4, pc);
 
 	return 1;
 }
kernel/trace/trace_events.c  +1
···
 
 	list_del(&file->list);
 	remove_subsystem(file->system);
+	free_event_filter(file->filter);
 	kmem_cache_free(file_cachep, file);
 }
 