Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'trace-v6.14-3' of git://git.kernel.org/pub/scm/linux/kernel/git/trace/linux-trace

Pull tracing updates from Steven Rostedt:

- Cleanup with guard() and free() helpers

There were several places in the code that had a lot of "goto out" in
the error paths to either unlock a lock or free some memory that was
allocated. But this is error prone. Convert the code over to use the
guard() and free() helpers, which have the compiler unlock locks or
free memory automatically when the function exits.

- Update the Rust tracepoint code to use the C code too

There was some duplication in the tracepoint code for Rust that
repeated the same logic as the C code. Add a helper that makes it
possible for both implementations to use the same logic in one place.

- Add poll to trace event hist files

It is useful to know when an event is triggered, possibly with some
filtering applied. Since the hist files of events are updated when the
event is active and triggered, allow applications to poll the hist
file and wake up when an event is triggered. This lets the application
know that the event it is waiting for has happened.

- Add :mod: command to enable events for current or future modules

The function tracer already has a way to enable tracing of a module's
functions by writing ":mod:<module>" into set_ftrace_filter. That will
either enable all the functions of the module if it is loaded, or, if
it is not, cache the command so that when a module matching <module>
is loaded, its functions will be enabled. This also allows init
functions to be traced. But currently events do not have that feature.

Add the same command, so that if ':mod:<module>' is written into
set_event, then either all the module's events are enabled if it is
loaded, or the command is cached and the module's events are enabled
when it is loaded. This also works from the kernel command line: with
"trace_event=:mod:<module>", the module's events will be enabled when
it is loaded at boot up.

* tag 'trace-v6.14-3' of git://git.kernel.org/pub/scm/linux/kernel/git/trace/linux-trace: (26 commits)
tracing: Fix output of set_event for some cached module events
tracing: Fix allocation of printing set_event file content
tracing: Rename update_cache() to update_mod_cache()
tracing: Fix #if CONFIG_MODULES to #ifdef CONFIG_MODULES
selftests/ftrace: Add test that tests event :mod: commands
tracing: Cache ":mod:" events for modules not loaded yet
tracing: Add :mod: command to enabled module events
selftests/tracing: Add hist poll() support test
tracing/hist: Support POLLPRI event for poll on histogram
tracing/hist: Add poll(POLLIN) support on hist file
tracing: Fix using ret variable in tracing_set_tracer()
tracepoint: Reduce duplication of __DO_TRACE_CALL
tracing/string: Create and use __free(argv_free) in trace_dynevent.c
tracing: Switch trace_stat.c code over to use guard()
tracing: Switch trace_stack.c code over to use guard()
tracing: Switch trace_osnoise.c code over to use guard() and __free()
tracing: Switch trace_events_synth.c code over to use guard()
tracing: Switch trace_events_filter.c code over to use guard()
tracing: Switch trace_events_trigger.c code over to use guard()
tracing: Switch trace_events_hist.c code over to use guard()
...

Total: +1054 -482
Documentation/admin-guide/kernel-parameters.txt | +8

```diff
 			comma-separated list of trace events to enable. See
 			also Documentation/trace/events.rst

+			To enable modules, use :mod: keyword:
+
+			  trace_event=:mod:<module>
+
+			The value before :mod: will only enable specific events
+			that are part of the module. See the above mentioned
+			document for more information.
+
 	trace_instance=[instance-info]
 			[FTRACE] Create a ring buffer instance early in boot up.
 			This will be listed in:
```
Documentation/trace/events.rst | +24

```diff

 	# echo 'irq:*' > /sys/kernel/tracing/set_event

+The set_event file may also be used to enable events associated with only
+a specific module::
+
+	# echo ':mod:<module>' > /sys/kernel/tracing/set_event
+
+Will enable all events in the module ``<module>``. If the module is not yet
+loaded, the string will be saved, and when a module that matches ``<module>``
+is loaded, the enabling of its events will be applied then.
+
+The text before ``:mod:`` will be parsed to specify specific events that the
+module creates::
+
+	# echo '<match>:mod:<module>' > /sys/kernel/tracing/set_event
+
+The above will enable any system or event that ``<match>`` matches. If
+``<match>`` is ``"*"`` then it will match all events.
+
+To enable only a specific event within a system::
+
+	# echo '<system>:<event>:mod:<module>' > /sys/kernel/tracing/set_event
+
+If ``<event>`` is ``"*"`` then it will match all events within the system
+for a given module.
+
 2.2 Via the 'enable' toggle
 ---------------------------

```
include/linux/string.h | +3

```diff

 #include <linux/args.h>
 #include <linux/array_size.h>
+#include <linux/cleanup.h>	/* for DEFINE_FREE() */
 #include <linux/compiler.h>	/* for inline */
 #include <linux/types.h>	/* for size_t */
 #include <linux/stddef.h>	/* for NULL */
···
 /* lib/argv_split.c */
 extern char **argv_split(gfp_t gfp, const char *str, int *argcp);
 extern void argv_free(char **argv);
+
+DEFINE_FREE(argv_free, char **, if (!IS_ERR_OR_NULL(_T)) argv_free(_T))

 /* lib/cmdline.c */
 extern int get_option(char **str, int *pint);
```
include/linux/trace_events.h | +14

```diff
 	atomic_t		tm_ref;	/* trigger-mode reference counter */
 };

+#ifdef CONFIG_HIST_TRIGGERS
+extern struct irq_work hist_poll_work;
+extern wait_queue_head_t hist_poll_wq;
+
+static inline void hist_poll_wakeup(void)
+{
+	if (wq_has_sleeper(&hist_poll_wq))
+		irq_work_queue(&hist_poll_work);
+}
+
+#define hist_poll_wait(file, wait)	\
+	poll_wait(file, &hist_poll_wq, wait)
+#endif
+
 #define __TRACE_EVENT_FLAGS(name, value)			\
 	static int __init trace_init_flags_##name(void)		\
 	{							\
```
include/linux/tracepoint.h | +7 -13

```diff
 #define __DEFINE_RUST_DO_TRACE(name, proto, args)	\
 	notrace void rust_do_trace_##name(proto)	\
 	{						\
-		__rust_do_trace_##name(args);		\
+		__do_trace_##name(args);		\
 	}

 /*
···
 #define __DECLARE_TRACE(name, proto, args, cond, data_proto)		\
 	__DECLARE_TRACE_COMMON(name, PARAMS(proto), PARAMS(args), PARAMS(data_proto)) \
-	static inline void __rust_do_trace_##name(proto)		\
+	static inline void __do_trace_##name(proto)			\
 	{								\
 		if (cond) {						\
 			guard(preempt_notrace)();			\
···
 	}								\
 	static inline void trace_##name(proto)				\
 	{								\
-		if (static_branch_unlikely(&__tracepoint_##name.key)) {	\
-			if (cond) {					\
-				guard(preempt_notrace)();		\
-				__DO_TRACE_CALL(name, TP_ARGS(args));	\
-			}						\
-		}							\
+		if (static_branch_unlikely(&__tracepoint_##name.key))	\
+			__do_trace_##name(args);			\
 		if (IS_ENABLED(CONFIG_LOCKDEP) && (cond)) {		\
 			WARN_ONCE(!rcu_is_watching(),			\
 				  "RCU not watching for tracepoint");	\
···
 #define __DECLARE_TRACE_SYSCALL(name, proto, args, data_proto)		\
 	__DECLARE_TRACE_COMMON(name, PARAMS(proto), PARAMS(args), PARAMS(data_proto)) \
-	static inline void __rust_do_trace_##name(proto)		\
+	static inline void __do_trace_##name(proto)			\
 	{								\
 		guard(rcu_tasks_trace)();				\
 		__DO_TRACE_CALL(name, TP_ARGS(args));			\
···
 	static inline void trace_##name(proto)				\
 	{								\
 		might_fault();						\
-		if (static_branch_unlikely(&__tracepoint_##name.key)) {	\
-			guard(rcu_tasks_trace)();			\
-			__DO_TRACE_CALL(name, TP_ARGS(args));		\
-		}							\
+		if (static_branch_unlikely(&__tracepoint_##name.key))	\
+			__do_trace_##name(args);			\
 		if (IS_ENABLED(CONFIG_LOCKDEP)) {			\
 			WARN_ONCE(!rcu_is_watching(),			\
 				  "RCU not watching for tracepoint");	\
```
kernel/trace/ftrace.c | -17

```diff
 	return __ftrace_hash_move_and_update_ops(ops, orig_hash, hash, enable);
 }

-static bool module_exists(const char *module)
-{
-	/* All modules have the symbol __this_module */
-	static const char this_mod[] = "__this_module";
-	char modname[MAX_PARAM_PREFIX_LEN + sizeof(this_mod) + 2];
-	unsigned long val;
-	int n;
-
-	n = snprintf(modname, sizeof(modname), "%s:%s", module, this_mod);
-
-	if (n > sizeof(modname) - 1)
-		return false;
-
-	val = module_kallsyms_lookup_name(modname);
-	return val != 0;
-}
-
 static int cache_mod(struct trace_array *tr,
 		     const char *func, char *module, int enable)
 {
```
kernel/trace/trace.c | +123 -174

```diff
 #include <linux/hardirq.h>
 #include <linux/linkage.h>
 #include <linux/uaccess.h>
+#include <linux/cleanup.h>
 #include <linux/vmalloc.h>
 #include <linux/ftrace.h>
 #include <linux/module.h>
···
 int trace_array_get(struct trace_array *this_tr)
 {
 	struct trace_array *tr;
-	int ret = -ENODEV;

-	mutex_lock(&trace_types_lock);
+	guard(mutex)(&trace_types_lock);
 	list_for_each_entry(tr, &ftrace_trace_arrays, list) {
 		if (tr == this_tr) {
 			tr->ref++;
-			ret = 0;
-			break;
+			return 0;
 		}
 	}
-	mutex_unlock(&trace_types_lock);

-	return ret;
+	return -ENODEV;
 }

 static void __trace_array_put(struct trace_array *this_tr)
···
 int tracing_snapshot_cond_enable(struct trace_array *tr, void *cond_data,
 				 cond_update_fn_t update)
 {
-	struct cond_snapshot *cond_snapshot;
-	int ret = 0;
+	struct cond_snapshot *cond_snapshot __free(kfree) =
+		kzalloc(sizeof(*cond_snapshot), GFP_KERNEL);
+	int ret;

-	cond_snapshot = kzalloc(sizeof(*cond_snapshot), GFP_KERNEL);
 	if (!cond_snapshot)
 		return -ENOMEM;

 	cond_snapshot->cond_data = cond_data;
 	cond_snapshot->update = update;

-	mutex_lock(&trace_types_lock);
+	guard(mutex)(&trace_types_lock);

-	if (tr->current_trace->use_max_tr) {
-		ret = -EBUSY;
-		goto fail_unlock;
-	}
+	if (tr->current_trace->use_max_tr)
+		return -EBUSY;

 	/*
···
 	 * do safely with only holding the trace_types_lock and not
 	 * having to take the max_lock.
 	 */
-	if (tr->cond_snapshot) {
-		ret = -EBUSY;
-		goto fail_unlock;
-	}
+	if (tr->cond_snapshot)
+		return -EBUSY;

 	ret = tracing_arm_snapshot_locked(tr);
 	if (ret)
-		goto fail_unlock;
+		return ret;

 	local_irq_disable();
 	arch_spin_lock(&tr->max_lock);
-	tr->cond_snapshot = cond_snapshot;
+	tr->cond_snapshot = no_free_ptr(cond_snapshot);
 	arch_spin_unlock(&tr->max_lock);
 	local_irq_enable();

-	mutex_unlock(&trace_types_lock);
-
-	return ret;
-
- fail_unlock:
-	mutex_unlock(&trace_types_lock);
-	kfree(cond_snapshot);
-	return ret;
+	return 0;
 }
 EXPORT_SYMBOL_GPL(tracing_snapshot_cond_enable);
···

 	selftests_can_run = true;

-	mutex_lock(&trace_types_lock);
+	guard(mutex)(&trace_types_lock);

 	if (list_empty(&postponed_selftests))
-		goto out;
+		return 0;

 	pr_info("Running postponed tracer tests:\n");
···
 		kfree(p);
 	}
 	tracing_selftest_running = false;
-
- out:
-	mutex_unlock(&trace_types_lock);

 	return 0;
 }
···
 	int save_tracepoint_printk;
 	int ret;

-	mutex_lock(&tracepoint_printk_mutex);
+	guard(mutex)(&tracepoint_printk_mutex);
 	save_tracepoint_printk = tracepoint_printk;

 	ret = proc_dointvec(table, write, buffer, lenp, ppos);
···
 		tracepoint_printk = 0;

 	if (save_tracepoint_printk == tracepoint_printk)
-		goto out;
+		return ret;

 	if (tracepoint_printk)
 		static_key_enable(&tracepoint_printk_key.key);
 	else
 		static_key_disable(&tracepoint_printk_key.key);
-
- out:
-	mutex_unlock(&tracepoint_printk_mutex);

 	return ret;
 }
···
 	u32 tracer_flags;
 	int i;

-	mutex_lock(&trace_types_lock);
+	guard(mutex)(&trace_types_lock);
+
 	tracer_flags = tr->current_trace->flags->val;
 	trace_opts = tr->current_trace->flags->opts;
···
 		else
 			seq_printf(m, "no%s\n", trace_opts[i].name);
 	}
-	mutex_unlock(&trace_types_lock);

 	return 0;
 }
···
 	"\t           efield: For event probes ('e' types), the field is on of the fields\n"
 	"\t           of the <attached-group>/<attached-event>.\n"
 #endif
+	"  set_event\t\t- Enables events by name written into it\n"
+	"\t\t\t  Can enable module events via: :mod:<module>\n"
 	"  events/\t\t- Directory containing all trace event subsystems:\n"
 	"      enable\t\t- Write 0/1 to enable/disable tracing of all events\n"
 	"  events/<system>/\t- Directory containing all trace events for <system>:\n"
···
 		return;
 	}

-	mutex_lock(&trace_eval_mutex);
+	guard(mutex)(&trace_eval_mutex);

 	if (!trace_eval_maps)
 		trace_eval_maps = map_array;
···
 		map_array++;
 	}
 	memset(map_array, 0, sizeof(*map_array));
-
-	mutex_unlock(&trace_eval_mutex);
 }

 static void trace_create_eval_file(struct dentry *d_tracer)
···
 {
 	int ret;

-	mutex_lock(&trace_types_lock);
+	guard(mutex)(&trace_types_lock);

 	if (cpu_id != RING_BUFFER_ALL_CPUS) {
 		/* make sure, this cpu is enabled in the mask */
-		if (!cpumask_test_cpu(cpu_id, tracing_buffer_mask)) {
-			ret = -EINVAL;
-			goto out;
-		}
+		if (!cpumask_test_cpu(cpu_id, tracing_buffer_mask))
+			return -EINVAL;
 	}

 	ret = __tracing_resize_ring_buffer(tr, size, cpu_id);
 	if (ret < 0)
 		ret = -ENOMEM;
-
- out:
-	mutex_unlock(&trace_types_lock);

 	return ret;
 }
···
 #ifdef CONFIG_TRACER_MAX_TRACE
 	bool had_max_tr;
 #endif
-	int ret = 0;
+	int ret;

-	mutex_lock(&trace_types_lock);
+	guard(mutex)(&trace_types_lock);

 	update_last_data(tr);

···
 		ret = __tracing_resize_ring_buffer(tr, trace_buf_size,
 						RING_BUFFER_ALL_CPUS);
 		if (ret < 0)
-			goto out;
+			return ret;
 		ret = 0;
 	}

···
 		if (strcmp(t->name, buf) == 0)
 			break;
 	}
-	if (!t) {
-		ret = -EINVAL;
-		goto out;
-	}
+	if (!t)
+		return -EINVAL;
+
 	if (t == tr->current_trace)
-		goto out;
+		return 0;

 #ifdef CONFIG_TRACER_SNAPSHOT
 	if (t->use_max_tr) {
 		local_irq_disable();
 		arch_spin_lock(&tr->max_lock);
-		if (tr->cond_snapshot)
-			ret = -EBUSY;
+		ret = tr->cond_snapshot ? -EBUSY : 0;
 		arch_spin_unlock(&tr->max_lock);
 		local_irq_enable();
 		if (ret)
-			goto out;
+			return ret;
 	}
 #endif
 	/* Some tracers won't work on kernel command line */
 	if (system_state < SYSTEM_RUNNING && t->noboot) {
 		pr_warn("Tracer '%s' is not allowed on command line, ignored\n",
 			t->name);
-		goto out;
+		return -EINVAL;
 	}

 	/* Some tracers are only allowed for the top level buffer */
-	if (!trace_ok_for_array(t, tr)) {
-		ret = -EINVAL;
-		goto out;
-	}
+	if (!trace_ok_for_array(t, tr))
+		return -EINVAL;

 	/* If trace pipe files are being read, we can't change the tracer */
-	if (tr->trace_ref) {
-		ret = -EBUSY;
-		goto out;
-	}
+	if (tr->trace_ref)
+		return -EBUSY;

 	trace_branch_disable();

···
 	if (!had_max_tr && t->use_max_tr) {
 		ret = tracing_arm_snapshot_locked(tr);
 		if (ret)
-			goto out;
+			return ret;
 	}
 #else
 	tr->current_trace = &nop_trace;
···
 			if (t->use_max_tr)
 				tracing_disarm_snapshot(tr);
 #endif
-			goto out;
+			return ret;
 		}
 	}

 	tr->current_trace = t;
 	tr->current_trace->enabled++;
 	trace_branch_enable(tr);
- out:
-	mutex_unlock(&trace_types_lock);

-	return ret;
+	return 0;
 }

 static ssize_t
···
 	struct trace_array *tr = filp->private_data;
 	int ret;

-	mutex_lock(&trace_types_lock);
+	guard(mutex)(&trace_types_lock);
 	ret = tracing_nsecs_write(&tracing_thresh, ubuf, cnt, ppos);
 	if (ret < 0)
-		goto out;
+		return ret;

 	if (tr->current_trace->update_thresh) {
 		ret = tr->current_trace->update_thresh(tr);
 		if (ret < 0)
-			goto out;
+			return ret;
 	}

-	ret = cnt;
-out:
-	mutex_unlock(&trace_types_lock);
-
-	return ret;
+	return cnt;
 }

 #ifdef CONFIG_TRACER_MAX_TRACE
···
 	 * This is just a matter of traces coherency, the ring buffer itself
 	 * is protected.
 	 */
-	mutex_lock(&iter->mutex);
+	guard(mutex)(&iter->mutex);

 	/* return any leftover data */
 	sret = trace_seq_to_user(&iter->seq, ubuf, cnt);
 	if (sret != -EBUSY)
-		goto out;
+		return sret;

 	trace_seq_init(&iter->seq);

 	if (iter->trace->read) {
 		sret = iter->trace->read(iter, filp, ubuf, cnt, ppos);
 		if (sret)
-			goto out;
+			return sret;
 	}

 waitagain:
 	sret = tracing_wait_pipe(filp);
 	if (sret <= 0)
-		goto out;
+		return sret;

 	/* stop when tracing is finished */
-	if (trace_empty(iter)) {
-		sret = 0;
-		goto out;
-	}
+	if (trace_empty(iter))
+		return 0;

 	if (cnt >= TRACE_SEQ_BUFFER_SIZE)
 		cnt = TRACE_SEQ_BUFFER_SIZE - 1;
···
 	 */
 	if (sret == -EBUSY)
 		goto waitagain;
-
-out:
-	mutex_unlock(&iter->mutex);

 	return sret;
 }
···
  */
 int tracing_set_filter_buffering(struct trace_array *tr, bool set)
 {
-	int ret = 0;
-
-	mutex_lock(&trace_types_lock);
+	guard(mutex)(&trace_types_lock);

 	if (set && tr->no_filter_buffering_ref++)
-		goto out;
+		return 0;

 	if (!set) {
-		if (WARN_ON_ONCE(!tr->no_filter_buffering_ref)) {
-			ret = -EINVAL;
-			goto out;
-		}
+		if (WARN_ON_ONCE(!tr->no_filter_buffering_ref))
+			return -EINVAL;

 		--tr->no_filter_buffering_ref;
 	}
- out:
-	mutex_unlock(&trace_types_lock);

-	return ret;
+	return 0;
 }

 struct ftrace_buffer_info {
···
 	if (ret)
 		return ret;

-	mutex_lock(&trace_types_lock);
+	guard(mutex)(&trace_types_lock);

-	if (tr->current_trace->use_max_tr) {
-		ret = -EBUSY;
-		goto out;
-	}
+	if (tr->current_trace->use_max_tr)
+		return -EBUSY;

 	local_irq_disable();
 	arch_spin_lock(&tr->max_lock);
···
 	arch_spin_unlock(&tr->max_lock);
 	local_irq_enable();
 	if (ret)
-		goto out;
+		return ret;

 	switch (val) {
 	case 0:
-		if (iter->cpu_file != RING_BUFFER_ALL_CPUS) {
-			ret = -EINVAL;
-			break;
-		}
+		if (iter->cpu_file != RING_BUFFER_ALL_CPUS)
+			return -EINVAL;
 		if (tr->allocated_snapshot)
 			free_snapshot(tr);
 		break;
 	case 1:
 /* Only allow per-cpu swap if the ring buffer supports it */
 #ifndef CONFIG_RING_BUFFER_ALLOW_SWAP
-		if (iter->cpu_file != RING_BUFFER_ALL_CPUS) {
-			ret = -EINVAL;
-			break;
-		}
+		if (iter->cpu_file != RING_BUFFER_ALL_CPUS)
+			return -EINVAL;
 #endif
 		if (tr->allocated_snapshot)
 			ret = resize_buffer_duplicate_size(&tr->max_buffer,
···
 		ret = tracing_arm_snapshot_locked(tr);
 		if (ret)
-			break;
+			return ret;

 		/* Now, we're going to swap */
 		if (iter->cpu_file == RING_BUFFER_ALL_CPUS) {
···
 		*ppos += cnt;
 		ret = cnt;
 	}
-out:
-	mutex_unlock(&trace_types_lock);
+
 	return ret;
 }
···
 	len += sizeof(CMD_PREFIX) + 2 * sizeof("\n") + strlen(cmd) + 1;

-	mutex_lock(&tracing_err_log_lock);
+	guard(mutex)(&tracing_err_log_lock);
+
 	err = get_tracing_log_err(tr, len);
-	if (PTR_ERR(err) == -ENOMEM) {
-		mutex_unlock(&tracing_err_log_lock);
+	if (PTR_ERR(err) == -ENOMEM)
 		return;
-	}

 	snprintf(err->loc, TRACING_LOG_LOC_MAX, "%s: error: ", loc);
 	snprintf(err->cmd, len, "\n" CMD_PREFIX "%s\n", cmd);
···
 	err->info.ts = local_clock();

 	list_add_tail(&err->list, &tr->err_log);
-	mutex_unlock(&tracing_err_log_lock);
 }

 static void clear_tracing_err_log(struct trace_array *tr)
···
 	INIT_LIST_HEAD(&tr->hist_vars);
 	INIT_LIST_HEAD(&tr->err_log);

+#ifdef CONFIG_MODULES
+	INIT_LIST_HEAD(&tr->mod_events);
+#endif
+
 	if (allocate_trace_buffers(tr, trace_buf_size) < 0)
 		goto out_free_tr;
···
 	struct trace_array *tr;
 	int ret;

-	mutex_lock(&event_mutex);
-	mutex_lock(&trace_types_lock);
+	guard(mutex)(&event_mutex);
+	guard(mutex)(&trace_types_lock);

 	ret = -EEXIST;
 	if (trace_array_find(name))
-		goto out_unlock;
+		return -EEXIST;

 	tr = trace_array_create(name);

 	ret = PTR_ERR_OR_ZERO(tr);

-out_unlock:
-	mutex_unlock(&trace_types_lock);
-	mutex_unlock(&event_mutex);
 	return ret;
 }
···
 {
 	struct trace_array *tr;

-	mutex_lock(&event_mutex);
-	mutex_lock(&trace_types_lock);
+	guard(mutex)(&event_mutex);
+	guard(mutex)(&trace_types_lock);

 	list_for_each_entry(tr, &ftrace_trace_arrays, list) {
-		if (tr->name && strcmp(tr->name, name) == 0)
-			goto out_unlock;
+		if (tr->name && strcmp(tr->name, name) == 0) {
+			tr->ref++;
+			return tr;
+		}
 	}

 	tr = trace_array_create_systems(name, systems, 0, 0);

 	if (IS_ERR(tr))
 		tr = NULL;
-out_unlock:
-	if (tr)
+	else
 		tr->ref++;

-	mutex_unlock(&trace_types_lock);
-	mutex_unlock(&event_mutex);
 	return tr;
 }
 EXPORT_SYMBOL_GPL(trace_array_get_by_name);
···
 int trace_array_destroy(struct trace_array *this_tr)
 {
 	struct trace_array *tr;
-	int ret;

 	if (!this_tr)
 		return -EINVAL;

-	mutex_lock(&event_mutex);
-	mutex_lock(&trace_types_lock);
+	guard(mutex)(&event_mutex);
+	guard(mutex)(&trace_types_lock);

-	ret = -ENODEV;

 	/* Making sure trace array exists before destroying it. */
 	list_for_each_entry(tr, &ftrace_trace_arrays, list) {
-		if (tr == this_tr) {
-			ret = __remove_instance(tr);
-			break;
-		}
+		if (tr == this_tr)
+			return __remove_instance(tr);
 	}

-	mutex_unlock(&trace_types_lock);
-	mutex_unlock(&event_mutex);
-
-	return ret;
+	return -ENODEV;
 }
 EXPORT_SYMBOL_GPL(trace_array_destroy);

 static int instance_rmdir(const char *name)
 {
 	struct trace_array *tr;
-	int ret;

-	mutex_lock(&event_mutex);
-	mutex_lock(&trace_types_lock);
+	guard(mutex)(&event_mutex);
+	guard(mutex)(&trace_types_lock);

-	ret = -ENODEV;
 	tr = trace_array_find(name);
-	if (tr)
-		ret = __remove_instance(tr);
+	if (!tr)
+		return -ENODEV;

-	mutex_unlock(&trace_types_lock);
-	mutex_unlock(&event_mutex);
-
-	return ret;
+	return __remove_instance(tr);
 }

 static __init void create_trace_instances(struct dentry *d_tracer)
···
 	if (MEM_FAIL(!trace_instance_dir, "Failed to create instances directory\n"))
 		return;

-	mutex_lock(&event_mutex);
-	mutex_lock(&trace_types_lock);
+	guard(mutex)(&event_mutex);
+	guard(mutex)(&trace_types_lock);

 	list_for_each_entry(tr, &ftrace_trace_arrays, list) {
 		if (!tr->name)
 			continue;
 		if (MEM_FAIL(trace_array_create_dir(tr) < 0,
 			     "Failed to create instance directory\n"))
-			break;
+			return;
 	}
-
-	mutex_unlock(&trace_types_lock);
-	mutex_unlock(&event_mutex);
 }

 static void
···

 #ifdef CONFIG_MODULES
+
+bool module_exists(const char *module)
+{
+	/* All modules have the symbol __this_module */
+	static const char this_mod[] = "__this_module";
+	char modname[MAX_PARAM_PREFIX_LEN + sizeof(this_mod) + 2];
+	unsigned long val;
+	int n;
+
+	n = snprintf(modname, sizeof(modname), "%s:%s", module, this_mod);
+
+	if (n > sizeof(modname) - 1)
+		return false;
+
+	val = module_kallsyms_lookup_name(modname);
+	return val != 0;
+}
+
 static void trace_module_add_evals(struct module *mod)
 {
 	if (!mod->num_trace_evals)
···
 	if (!mod->num_trace_evals)
 		return;

-	mutex_lock(&trace_eval_mutex);
+	guard(mutex)(&trace_eval_mutex);

 	map = trace_eval_maps;

···
 		map = map->tail.next;
 	}
 	if (!map)
-		goto out;
+		return;

 	*last = trace_eval_jmp_to_tail(map)->tail.next;
 	kfree(map);
- out:
-	mutex_unlock(&trace_eval_mutex);
 }
 #else
 static inline void trace_module_remove_evals(struct module *mod) { }
···
 	spin_lock_init(&global_trace.snapshot_trigger_lock);
 #endif
 	ftrace_init_global_array_ops(&global_trace);
+
+#ifdef CONFIG_MODULES
+	INIT_LIST_HEAD(&global_trace.mod_events);
+#endif

 	init_trace_flags_index(&global_trace);
```
kernel/trace/trace.h | +12

```diff
 	cpumask_var_t		pipe_cpumask;
 	int			ref;
 	int			trace_ref;
+#ifdef CONFIG_MODULES
+	struct list_head	mod_events;
+#endif
 #ifdef CONFIG_FUNCTION_TRACER
 	struct ftrace_ops	*ops;
 	struct trace_pid_list	__rcu *function_pids;
···
 	TRACE_ARRAY_FL_BOOT	= BIT(1),
 	TRACE_ARRAY_FL_MOD_INIT	= BIT(2),
 };

+#ifdef CONFIG_MODULES
+bool module_exists(const char *module);
+#else
+static inline bool module_exists(const char *module)
+{
+	return false;
+}
+#endif
+
 extern struct list_head ftrace_trace_arrays;

```
kernel/trace/trace_dynevent.c | +7 -16

```diff
 	struct dyn_event *pos, *n;
 	char *system = NULL, *event, *p;
 	int argc, ret = -ENOENT;
-	char **argv;
+	char **argv __free(argv_free) = argv_split(GFP_KERNEL, raw_command, &argc);

-	argv = argv_split(GFP_KERNEL, raw_command, &argc);
 	if (!argv)
 		return -ENOMEM;

 	if (argv[0][0] == '-') {
-		if (argv[0][1] != ':') {
-			ret = -EINVAL;
-			goto out;
-		}
+		if (argv[0][1] != ':')
+			return -EINVAL;
 		event = &argv[0][2];
 	} else {
 		event = strchr(argv[0], ':');
-		if (!event) {
-			ret = -EINVAL;
-			goto out;
-		}
+		if (!event)
+			return -EINVAL;
 		event++;
 	}
···
 		event = p + 1;
 		*p = '\0';
 	}
-	if (!system && event[0] == '\0') {
-		ret = -EINVAL;
-		goto out;
-	}
+	if (!system && event[0] == '\0')
+		return -EINVAL;

 	mutex_lock(&event_mutex);
 	for_each_dyn_event_safe(pos, n) {
···
 	}
 	tracing_reset_all_online_cpus();
 	mutex_unlock(&event_mutex);
-out:
-	argv_free(argv);
 	return ret;
 }

```
+354 -117
kernel/trace/trace_events.c
··· 869 869 return __ftrace_event_enable_disable(file, enable, 0); 870 870 } 871 871 872 + #ifdef CONFIG_MODULES 873 + struct event_mod_load { 874 + struct list_head list; 875 + char *module; 876 + char *match; 877 + char *system; 878 + char *event; 879 + }; 880 + 881 + static void free_event_mod(struct event_mod_load *event_mod) 882 + { 883 + list_del(&event_mod->list); 884 + kfree(event_mod->module); 885 + kfree(event_mod->match); 886 + kfree(event_mod->system); 887 + kfree(event_mod->event); 888 + kfree(event_mod); 889 + } 890 + 891 + static void clear_mod_events(struct trace_array *tr) 892 + { 893 + struct event_mod_load *event_mod, *n; 894 + 895 + list_for_each_entry_safe(event_mod, n, &tr->mod_events, list) { 896 + free_event_mod(event_mod); 897 + } 898 + } 899 + 900 + static int remove_cache_mod(struct trace_array *tr, const char *mod, 901 + const char *match, const char *system, const char *event) 902 + { 903 + struct event_mod_load *event_mod, *n; 904 + int ret = -EINVAL; 905 + 906 + list_for_each_entry_safe(event_mod, n, &tr->mod_events, list) { 907 + if (strcmp(event_mod->module, mod) != 0) 908 + continue; 909 + 910 + if (match && strcmp(event_mod->match, match) != 0) 911 + continue; 912 + 913 + if (system && 914 + (!event_mod->system || strcmp(event_mod->system, system) != 0)) 915 + continue; 916 + 917 + if (event && 918 + (!event_mod->event || strcmp(event_mod->event, event) != 0)) 919 + continue; 920 + 921 + free_event_mod(event_mod); 922 + ret = 0; 923 + } 924 + 925 + return ret; 926 + } 927 + 928 + static int cache_mod(struct trace_array *tr, const char *mod, int set, 929 + const char *match, const char *system, const char *event) 930 + { 931 + struct event_mod_load *event_mod; 932 + 933 + /* If the module exists, then this just failed to find an event */ 934 + if (module_exists(mod)) 935 + return -EINVAL; 936 + 937 + /* See if this is to remove a cached filter */ 938 + if (!set) 939 + return remove_cache_mod(tr, mod, match, system, event); 940 + 
941 + event_mod = kzalloc(sizeof(*event_mod), GFP_KERNEL); 942 + if (!event_mod) 943 + return -ENOMEM; 944 + 945 + INIT_LIST_HEAD(&event_mod->list); 946 + event_mod->module = kstrdup(mod, GFP_KERNEL); 947 + if (!event_mod->module) 948 + goto out_free; 949 + 950 + if (match) { 951 + event_mod->match = kstrdup(match, GFP_KERNEL); 952 + if (!event_mod->match) 953 + goto out_free; 954 + } 955 + 956 + if (system) { 957 + event_mod->system = kstrdup(system, GFP_KERNEL); 958 + if (!event_mod->system) 959 + goto out_free; 960 + } 961 + 962 + if (event) { 963 + event_mod->event = kstrdup(event, GFP_KERNEL); 964 + if (!event_mod->event) 965 + goto out_free; 966 + } 967 + 968 + list_add(&event_mod->list, &tr->mod_events); 969 + 970 + return 0; 971 + 972 + out_free: 973 + free_event_mod(event_mod); 974 + 975 + return -ENOMEM; 976 + } 977 + #else /* CONFIG_MODULES */ 978 + static inline void clear_mod_events(struct trace_array *tr) { } 979 + static int cache_mod(struct trace_array *tr, const char *mod, int set, 980 + const char *match, const char *system, const char *event) 981 + { 982 + return -EINVAL; 983 + } 984 + #endif 985 + 872 986 static void ftrace_clear_events(struct trace_array *tr) 873 987 { 874 988 struct trace_event_file *file; ··· 991 877 list_for_each_entry(file, &tr->events, list) { 992 878 ftrace_event_enable_disable(file, 0); 993 879 } 880 + clear_mod_events(tr); 994 881 mutex_unlock(&event_mutex); 995 882 } 996 883 ··· 1280 1165 */ 1281 1166 static int 1282 1167 __ftrace_set_clr_event_nolock(struct trace_array *tr, const char *match, 1283 - const char *sub, const char *event, int set) 1168 + const char *sub, const char *event, int set, 1169 + const char *mod) 1284 1170 { 1285 1171 struct trace_event_file *file; 1286 1172 struct trace_event_call *call; 1173 + char *module __free(kfree) = NULL; 1287 1174 const char *name; 1288 1175 int ret = -EINVAL; 1289 1176 int eret = 0; 1290 1177 1178 + if (mod) { 1179 + char *p; 1180 + 1181 + module = kstrdup(mod, 
GFP_KERNEL); 1182 + if (!module) 1183 + return -ENOMEM; 1184 + 1185 + /* Replace all '-' with '_' as that's what modules do */ 1186 + for (p = strchr(module, '-'); p; p = strchr(p + 1, '-')) 1187 + *p = '_'; 1188 + } 1189 + 1291 1190 list_for_each_entry(file, &tr->events, list) { 1292 1191 1293 1192 call = file->event_call; 1193 + 1194 + /* If a module is specified, skip events that are not that module */ 1195 + if (module && (!call->module || strcmp(module_name(call->module), module))) 1196 + continue; 1197 + 1294 1198 name = trace_event_name(call); 1295 1199 1296 1200 if (!name || !call->class || !call->class->reg) ··· 1342 1208 ret = eret; 1343 1209 } 1344 1210 1211 + /* 1212 + * If this is a module setting and nothing was found, 1213 + * check if the module was loaded. If it wasn't, cache it. 1214 + */ 1215 + if (module && ret == -EINVAL && !eret) 1216 + ret = cache_mod(tr, module, set, match, sub, event); 1217 + 1345 1218 return ret; 1346 1219 } 1347 1220 1348 1221 static int __ftrace_set_clr_event(struct trace_array *tr, const char *match, 1349 - const char *sub, const char *event, int set) 1222 + const char *sub, const char *event, int set, 1223 + const char *mod) 1350 1224 { 1351 1225 int ret; 1352 1226 1353 1227 mutex_lock(&event_mutex); 1354 - ret = __ftrace_set_clr_event_nolock(tr, match, sub, event, set); 1228 + ret = __ftrace_set_clr_event_nolock(tr, match, sub, event, set, mod); 1355 1229 mutex_unlock(&event_mutex); 1356 1230 1357 1231 return ret; ··· 1367 1225 1368 1226 int ftrace_set_clr_event(struct trace_array *tr, char *buf, int set) 1369 1227 { 1370 - char *event = NULL, *sub = NULL, *match; 1228 + char *event = NULL, *sub = NULL, *match, *mod; 1371 1229 int ret; 1372 1230 1373 1231 if (!tr) 1374 1232 return -ENOENT; 1233 + 1234 + /* Module events can be appended with :mod:<module> */ 1235 + mod = strstr(buf, ":mod:"); 1236 + if (mod) { 1237 + *mod = '\0'; 1238 + /* move to the module name */ 1239 + mod += 5; 1240 + } 1241 + 1375 1242 /* 1376 
1243 * The buf format can be <subsystem>:<event-name> 1377 1244 * *:<event-name> means any event by that name. ··· 1403 1252 sub = NULL; 1404 1253 if (!strlen(event) || strcmp(event, "*") == 0) 1405 1254 event = NULL; 1255 + } else if (mod) { 1256 + /* Allow wildcard for no length or star */ 1257 + if (!strlen(match) || strcmp(match, "*") == 0) 1258 + match = NULL; 1406 1259 } 1407 1260 1408 - ret = __ftrace_set_clr_event(tr, match, sub, event, set); 1261 + ret = __ftrace_set_clr_event(tr, match, sub, event, set, mod); 1409 1262 1410 1263 /* Put back the colon to allow this to be called again */ 1411 1264 if (buf) ··· 1437 1282 if (!tr) 1438 1283 return -ENODEV; 1439 1284 1440 - return __ftrace_set_clr_event(tr, NULL, system, event, set); 1285 + return __ftrace_set_clr_event(tr, NULL, system, event, set, NULL); 1441 1286 } 1442 1287 EXPORT_SYMBOL_GPL(trace_set_clr_event); 1443 1288 ··· 1463 1308 return -ENOENT; 1464 1309 1465 1310 set = (enable == true) ? 1 : 0; 1466 - return __ftrace_set_clr_event(tr, NULL, system, event, set); 1311 + return __ftrace_set_clr_event(tr, NULL, system, event, set, NULL); 1467 1312 } 1468 1313 EXPORT_SYMBOL_GPL(trace_array_set_clr_event); 1469 1314 ··· 1550 1395 return file; 1551 1396 } 1552 1397 1398 + enum set_event_iter_type { 1399 + SET_EVENT_FILE, 1400 + SET_EVENT_MOD, 1401 + }; 1402 + 1403 + struct set_event_iter { 1404 + enum set_event_iter_type type; 1405 + union { 1406 + struct trace_event_file *file; 1407 + struct event_mod_load *event_mod; 1408 + }; 1409 + }; 1410 + 1553 1411 static void * 1554 1412 s_next(struct seq_file *m, void *v, loff_t *pos) 1555 1413 { 1556 - struct trace_event_file *file = v; 1414 + struct set_event_iter *iter = v; 1415 + struct trace_event_file *file; 1557 1416 struct trace_array *tr = m->private; 1558 1417 1559 1418 (*pos)++; 1560 1419 1561 - list_for_each_entry_continue(file, &tr->events, list) { 1562 - if (file->flags & EVENT_FILE_FL_ENABLED) 1563 - return file; 1420 + if (iter->type == 
SET_EVENT_FILE) { 1421 + file = iter->file; 1422 + list_for_each_entry_continue(file, &tr->events, list) { 1423 + if (file->flags & EVENT_FILE_FL_ENABLED) { 1424 + iter->file = file; 1425 + return iter; 1426 + } 1427 + } 1428 + #ifdef CONFIG_MODULES 1429 + iter->type = SET_EVENT_MOD; 1430 + iter->event_mod = list_entry(&tr->mod_events, struct event_mod_load, list); 1431 + #endif 1564 1432 } 1433 + 1434 + #ifdef CONFIG_MODULES 1435 + list_for_each_entry_continue(iter->event_mod, &tr->mod_events, list) 1436 + return iter; 1437 + #endif 1565 1438 1566 1439 return NULL; 1567 1440 } 1568 1441 1569 1442 static void *s_start(struct seq_file *m, loff_t *pos) 1570 1443 { 1571 - struct trace_event_file *file; 1572 1444 struct trace_array *tr = m->private; 1445 + struct set_event_iter *iter; 1573 1446 loff_t l; 1447 + 1448 + iter = kzalloc(sizeof(*iter), GFP_KERNEL); 1449 + if (!iter) 1450 + return NULL; 1574 1451 1575 1452 mutex_lock(&event_mutex); 1576 1453 1577 - file = list_entry(&tr->events, struct trace_event_file, list); 1454 + iter->type = SET_EVENT_FILE; 1455 + iter->file = list_entry(&tr->events, struct trace_event_file, list); 1456 + 1578 1457 for (l = 0; l <= *pos; ) { 1579 - file = s_next(m, file, &l); 1580 - if (!file) 1458 + iter = s_next(m, iter, &l); 1459 + if (!iter) 1581 1460 break; 1582 1461 } 1583 - return file; 1462 + return iter; 1584 1463 } 1585 1464 1586 1465 static int t_show(struct seq_file *m, void *v) ··· 1632 1443 static void t_stop(struct seq_file *m, void *p) 1633 1444 { 1634 1445 mutex_unlock(&event_mutex); 1446 + } 1447 + 1448 + #ifdef CONFIG_MODULES 1449 + static int s_show(struct seq_file *m, void *v) 1450 + { 1451 + struct set_event_iter *iter = v; 1452 + const char *system; 1453 + const char *event; 1454 + 1455 + if (iter->type == SET_EVENT_FILE) 1456 + return t_show(m, iter->file); 1457 + 1458 + /* When match is set, system and event are not */ 1459 + if (iter->event_mod->match) { 1460 + seq_printf(m, "%s:mod:%s\n", 
iter->event_mod->match, 1461 + iter->event_mod->module); 1462 + return 0; 1463 + } 1464 + 1465 + system = iter->event_mod->system ? : "*"; 1466 + event = iter->event_mod->event ? : "*"; 1467 + 1468 + seq_printf(m, "%s:%s:mod:%s\n", system, event, iter->event_mod->module); 1469 + 1470 + return 0; 1471 + } 1472 + #else /* CONFIG_MODULES */ 1473 + static int s_show(struct seq_file *m, void *v) 1474 + { 1475 + struct set_event_iter *iter = v; 1476 + 1477 + return t_show(m, iter->file); 1478 + } 1479 + #endif 1480 + 1481 + static void s_stop(struct seq_file *m, void *p) 1482 + { 1483 + kfree(p); 1484 + t_stop(m, NULL); 1635 1485 } 1636 1486 1637 1487 static void * ··· 1786 1558 if (ret) 1787 1559 return ret; 1788 1560 1561 + guard(mutex)(&event_mutex); 1562 + 1789 1563 switch (val) { 1790 1564 case 0: 1791 1565 case 1: 1792 - ret = -ENODEV; 1793 - mutex_lock(&event_mutex); 1794 1566 file = event_file_file(filp); 1795 - if (likely(file)) { 1796 - ret = tracing_update_buffers(file->tr); 1797 - if (ret < 0) { 1798 - mutex_unlock(&event_mutex); 1799 - return ret; 1800 - } 1801 - ret = ftrace_event_enable_disable(file, val); 1802 - } 1803 - mutex_unlock(&event_mutex); 1567 + if (!file) 1568 + return -ENODEV; 1569 + ret = tracing_update_buffers(file->tr); 1570 + if (ret < 0) 1571 + return ret; 1572 + ret = ftrace_event_enable_disable(file, val); 1573 + if (ret < 0) 1574 + return ret; 1804 1575 break; 1805 1576 1806 1577 default: ··· 1808 1581 1809 1582 *ppos += cnt; 1810 1583 1811 - return ret ? 
ret : cnt; 1584 + return cnt; 1812 1585 } 1813 1586 1814 1587 static ssize_t ··· 1886 1659 if (system) 1887 1660 name = system->name; 1888 1661 1889 - ret = __ftrace_set_clr_event(dir->tr, NULL, name, NULL, val); 1662 + ret = __ftrace_set_clr_event(dir->tr, NULL, name, NULL, val, NULL); 1890 1663 if (ret) 1891 1664 goto out; 1892 1665 ··· 2384 2157 if (ret < 0) 2385 2158 return ret; 2386 2159 2387 - mutex_lock(&event_mutex); 2160 + guard(mutex)(&event_mutex); 2388 2161 2389 2162 if (type == TRACE_PIDS) { 2390 2163 filtered_pids = rcu_dereference_protected(tr->filtered_pids, ··· 2400 2173 2401 2174 ret = trace_pid_write(filtered_pids, &pid_list, ubuf, cnt); 2402 2175 if (ret < 0) 2403 - goto out; 2176 + return ret; 2404 2177 2405 2178 if (type == TRACE_PIDS) 2406 2179 rcu_assign_pointer(tr->filtered_pids, pid_list); ··· 2425 2198 */ 2426 2199 on_each_cpu(ignore_task_cpu, tr, 1); 2427 2200 2428 - out: 2429 - mutex_unlock(&event_mutex); 2430 - 2431 - if (ret > 0) 2432 - *ppos += ret; 2201 + *ppos += ret; 2433 2202 2434 2203 return ret; 2435 2204 } ··· 2460 2237 static const struct seq_operations show_set_event_seq_ops = { 2461 2238 .start = s_start, 2462 2239 .next = s_next, 2463 - .show = t_show, 2464 - .stop = t_stop, 2240 + .show = s_show, 2241 + .stop = s_stop, 2465 2242 }; 2466 2243 2467 2244 static const struct seq_operations show_set_pid_seq_ops = { ··· 3334 3111 return !*p || isspace(*p) || *p == ','; 3335 3112 } 3336 3113 3114 + #ifdef CONFIG_HIST_TRIGGERS 3115 + /* 3116 + * Wake up waiter on the hist_poll_wq from irq_work because the hist trigger 3117 + * may happen in any context. 
3118 + */ 3119 + static void hist_poll_event_irq_work(struct irq_work *work) 3120 + { 3121 + wake_up_all(&hist_poll_wq); 3122 + } 3123 + 3124 + DEFINE_IRQ_WORK(hist_poll_work, hist_poll_event_irq_work); 3125 + DECLARE_WAIT_QUEUE_HEAD(hist_poll_wq); 3126 + #endif 3127 + 3337 3128 static struct trace_event_file * 3338 3129 trace_create_new_event(struct trace_event_call *call, 3339 3130 struct trace_array *tr) ··· 3506 3269 int ret; 3507 3270 lockdep_assert_held(&event_mutex); 3508 3271 3509 - mutex_lock(&trace_types_lock); 3272 + guard(mutex)(&trace_types_lock); 3510 3273 3511 3274 ret = __register_event(call, NULL); 3512 - if (ret >= 0) 3513 - __add_event_to_tracers(call); 3275 + if (ret < 0) 3276 + return ret; 3514 3277 3515 - mutex_unlock(&trace_types_lock); 3278 + __add_event_to_tracers(call); 3516 3279 return ret; 3517 3280 } 3518 3281 EXPORT_SYMBOL_GPL(trace_add_event_call); ··· 3592 3355 event++) 3593 3356 3594 3357 #ifdef CONFIG_MODULES 3358 + static void update_mod_cache(struct trace_array *tr, struct module *mod) 3359 + { 3360 + struct event_mod_load *event_mod, *n; 3361 + 3362 + list_for_each_entry_safe(event_mod, n, &tr->mod_events, list) { 3363 + if (strcmp(event_mod->module, mod->name) != 0) 3364 + continue; 3365 + 3366 + __ftrace_set_clr_event_nolock(tr, event_mod->match, 3367 + event_mod->system, 3368 + event_mod->event, 1, mod->name); 3369 + free_event_mod(event_mod); 3370 + } 3371 + } 3372 + 3373 + static void update_cache_events(struct module *mod) 3374 + { 3375 + struct trace_array *tr; 3376 + 3377 + list_for_each_entry(tr, &ftrace_trace_arrays, list) 3378 + update_mod_cache(tr, mod); 3379 + } 3595 3380 3596 3381 static void trace_module_add_events(struct module *mod) 3597 3382 { ··· 3636 3377 __register_event(*call, mod); 3637 3378 __add_event_to_tracers(*call); 3638 3379 } 3380 + 3381 + update_cache_events(mod); 3639 3382 } 3640 3383 3641 3384 static void trace_module_remove_events(struct module *mod) ··· 3790 3529 return ERR_PTR(ret); 3791 3530 
} 3792 3531 3793 - mutex_lock(&event_mutex); 3532 + guard(mutex)(&event_mutex); 3794 3533 3795 3534 file = find_event_file(tr, system, event); 3796 3535 if (!file) { 3797 3536 trace_array_put(tr); 3798 - ret = -EINVAL; 3799 - goto out; 3537 + return ERR_PTR(-EINVAL); 3800 3538 } 3801 3539 3802 3540 /* Don't let event modules unload while in use */ 3803 3541 ret = trace_event_try_get_ref(file->event_call); 3804 3542 if (!ret) { 3805 3543 trace_array_put(tr); 3806 - ret = -EBUSY; 3807 - goto out; 3544 + return ERR_PTR(-EBUSY); 3808 3545 } 3809 - 3810 - ret = 0; 3811 - out: 3812 - mutex_unlock(&event_mutex); 3813 - 3814 - if (ret) 3815 - file = ERR_PTR(ret); 3816 3546 3817 3547 return file; 3818 3548 } ··· 4022 3770 struct trace_event_file *file; 4023 3771 struct ftrace_probe_ops *ops; 4024 3772 struct event_probe_data *data; 3773 + unsigned long count = -1; 4025 3774 const char *system; 4026 3775 const char *event; 4027 3776 char *number; ··· 4042 3789 4043 3790 event = strsep(&param, ":"); 4044 3791 4045 - mutex_lock(&event_mutex); 3792 + guard(mutex)(&event_mutex); 4046 3793 4047 - ret = -EINVAL; 4048 3794 file = find_event_file(tr, system, event); 4049 3795 if (!file) 4050 - goto out; 3796 + return -EINVAL; 4051 3797 4052 3798 enable = strcmp(cmd, ENABLE_EVENT_STR) == 0; 4053 3799 ··· 4055 3803 else 4056 3804 ops = param ? &event_disable_count_probe_ops : &event_disable_probe_ops; 4057 3805 4058 - if (glob[0] == '!') { 4059 - ret = unregister_ftrace_function_probe_func(glob+1, tr, ops); 4060 - goto out; 3806 + if (glob[0] == '!') 3807 + return unregister_ftrace_function_probe_func(glob+1, tr, ops); 3808 + 3809 + if (param) { 3810 + number = strsep(&param, ":"); 3811 + 3812 + if (!strlen(number)) 3813 + return -EINVAL; 3814 + 3815 + /* 3816 + * We use the callback data field (which is a pointer) 3817 + * as our counter. 
3818 + */ 3819 + ret = kstrtoul(number, 0, &count); 3820 + if (ret) 3821 + return ret; 4061 3822 } 4062 3823 4063 - ret = -ENOMEM; 4064 - 4065 - data = kzalloc(sizeof(*data), GFP_KERNEL); 4066 - if (!data) 4067 - goto out; 4068 - 4069 - data->enable = enable; 4070 - data->count = -1; 4071 - data->file = file; 4072 - 4073 - if (!param) 4074 - goto out_reg; 4075 - 4076 - number = strsep(&param, ":"); 4077 - 4078 - ret = -EINVAL; 4079 - if (!strlen(number)) 4080 - goto out_free; 4081 - 4082 - /* 4083 - * We use the callback data field (which is a pointer) 4084 - * as our counter. 4085 - */ 4086 - ret = kstrtoul(number, 0, &data->count); 4087 - if (ret) 4088 - goto out_free; 4089 - 4090 - out_reg: 4091 3824 /* Don't let event modules unload while probe registered */ 4092 3825 ret = trace_event_try_get_ref(file->event_call); 4093 - if (!ret) { 4094 - ret = -EBUSY; 4095 - goto out_free; 4096 - } 3826 + if (!ret) 3827 + return -EBUSY; 4097 3828 4098 3829 ret = __ftrace_event_enable_disable(file, 1, 1); 4099 3830 if (ret < 0) 4100 3831 goto out_put; 3832 + 3833 + ret = -ENOMEM; 3834 + data = kzalloc(sizeof(*data), GFP_KERNEL); 3835 + if (!data) 3836 + goto out_put; 3837 + 3838 + data->enable = enable; 3839 + data->count = count; 3840 + data->file = file; 4101 3841 4102 3842 ret = register_ftrace_function_probe(glob, tr, ops, data); 4103 3843 /* ··· 4097 3853 * but if it didn't find any functions it returns zero. 4098 3854 * Consider no functions a failure too. 
4099 3855 */ 4100 - if (!ret) { 4101 - ret = -ENOENT; 4102 - goto out_disable; 4103 - } else if (ret < 0) 4104 - goto out_disable; 4105 - /* Just return zero, not the number of enabled functions */ 4106 - ret = 0; 4107 - out: 4108 - mutex_unlock(&event_mutex); 4109 - return ret; 4110 3856 4111 - out_disable: 3857 + /* Just return zero, not the number of enabled functions */ 3858 + if (ret > 0) 3859 + return 0; 3860 + 3861 + kfree(data); 3862 + 3863 + if (!ret) 3864 + ret = -ENOENT; 3865 + 4112 3866 __ftrace_event_enable_disable(file, 0, 1); 4113 3867 out_put: 4114 3868 trace_event_put_ref(file->event_call); 4115 - out_free: 4116 - kfree(data); 4117 - goto out; 3869 + return ret; 4118 3870 } 4119 3871 4120 3872 static struct ftrace_func_command event_enable_cmd = { ··· 4333 4093 { 4334 4094 int ret; 4335 4095 4336 - mutex_lock(&event_mutex); 4096 + guard(mutex)(&event_mutex); 4337 4097 4338 4098 ret = create_event_toplevel_files(parent, tr); 4339 4099 if (ret) 4340 - goto out_unlock; 4100 + return ret; 4341 4101 4342 4102 down_write(&trace_event_sem); 4343 4103 __trace_early_add_event_dirs(tr); 4344 4104 up_write(&trace_event_sem); 4345 4105 4346 - out_unlock: 4347 - mutex_unlock(&event_mutex); 4348 - 4349 - return ret; 4106 + return 0; 4350 4107 } 4351 4108 4352 4109 /* Must be called with event_mutex held */ ··· 4358 4121 __ftrace_clear_event_pids(tr, TRACE_PIDS | TRACE_NO_PIDS); 4359 4122 4360 4123 /* Disable any running events */ 4361 - __ftrace_set_clr_event_nolock(tr, NULL, NULL, NULL, 0); 4124 + __ftrace_set_clr_event_nolock(tr, NULL, NULL, NULL, 0, NULL); 4362 4125 4363 4126 /* Make sure no more events are being executed */ 4364 4127 tracepoint_synchronize_unregister(); ··· 4642 4405 4643 4406 pr_info("Testing event system %s: ", system->name); 4644 4407 4645 - ret = __ftrace_set_clr_event(tr, NULL, system->name, NULL, 1); 4408 + ret = __ftrace_set_clr_event(tr, NULL, system->name, NULL, 1, NULL); 4646 4409 if (WARN_ON_ONCE(ret)) { 4647 4410 pr_warn("error 
enabling system %s\n", 4648 4411 system->name); ··· 4651 4414 4652 4415 event_test_stuff(); 4653 4416 4654 - ret = __ftrace_set_clr_event(tr, NULL, system->name, NULL, 0); 4417 + ret = __ftrace_set_clr_event(tr, NULL, system->name, NULL, 0, NULL); 4655 4418 if (WARN_ON_ONCE(ret)) { 4656 4419 pr_warn("error disabling system %s\n", 4657 4420 system->name); ··· 4666 4429 pr_info("Running tests on all trace events:\n"); 4667 4430 pr_info("Testing all events: "); 4668 4431 4669 - ret = __ftrace_set_clr_event(tr, NULL, NULL, NULL, 1); 4432 + ret = __ftrace_set_clr_event(tr, NULL, NULL, NULL, 1, NULL); 4670 4433 if (WARN_ON_ONCE(ret)) { 4671 4434 pr_warn("error enabling all events\n"); 4672 4435 return; ··· 4675 4438 event_test_stuff(); 4676 4439 4677 4440 /* reset sysname */ 4678 - ret = __ftrace_set_clr_event(tr, NULL, NULL, NULL, 0); 4441 + ret = __ftrace_set_clr_event(tr, NULL, NULL, NULL, 0, NULL); 4679 4442 if (WARN_ON_ONCE(ret)) { 4680 4443 pr_warn("error disabling all events\n"); 4681 4444 return;
+7 -16
kernel/trace/trace_events_filter.c
··· 2405 2405 struct event_filter *filter = NULL; 2406 2406 int err = 0; 2407 2407 2408 - mutex_lock(&event_mutex); 2408 + guard(mutex)(&event_mutex); 2409 2409 2410 2410 /* Make sure the system still has events */ 2411 - if (!dir->nr_events) { 2412 - err = -ENODEV; 2413 - goto out_unlock; 2414 - } 2411 + if (!dir->nr_events) 2412 + return -ENODEV; 2415 2413 2416 2414 if (!strcmp(strstrip(filter_string), "0")) { 2417 2415 filter_free_subsystem_preds(dir, tr); ··· 2420 2422 tracepoint_synchronize_unregister(); 2421 2423 filter_free_subsystem_filters(dir, tr); 2422 2424 __free_filter(filter); 2423 - goto out_unlock; 2425 + return 0; 2424 2426 } 2425 2427 2426 2428 err = create_system_filter(dir, filter_string, &filter); ··· 2432 2434 __free_filter(system->filter); 2433 2435 system->filter = filter; 2434 2436 } 2435 - out_unlock: 2436 - mutex_unlock(&event_mutex); 2437 2437 2438 2438 return err; 2439 2439 } ··· 2608 2612 struct event_filter *filter = NULL; 2609 2613 struct trace_event_call *call; 2610 2614 2611 - mutex_lock(&event_mutex); 2615 + guard(mutex)(&event_mutex); 2612 2616 2613 2617 call = event->tp_event; 2614 2618 2615 - err = -EINVAL; 2616 2619 if (!call) 2617 - goto out_unlock; 2620 + return -EINVAL; 2618 2621 2619 - err = -EEXIST; 2620 2622 if (event->filter) 2621 - goto out_unlock; 2623 + return -EEXIST; 2622 2624 2623 2625 err = create_filter(NULL, call, filter_str, false, &filter); 2624 2626 if (err) ··· 2630 2636 free_filter: 2631 2637 if (err || ftrace_event_is_function(call)) 2632 2638 __free_filter(filter); 2633 - 2634 - out_unlock: 2635 - mutex_unlock(&event_mutex); 2636 2639 2637 2640 return err; 2638 2641 }
+97 -22
kernel/trace/trace_events_hist.c
··· 5311 5311 5312 5312 if (resolve_var_refs(hist_data, key, var_ref_vals, true)) 5313 5313 hist_trigger_actions(hist_data, elt, buffer, rec, rbe, key, var_ref_vals); 5314 + 5315 + hist_poll_wakeup(); 5314 5316 } 5315 5317 5316 5318 static void hist_trigger_stacktrace_print(struct seq_file *m, ··· 5592 5590 n_entries, (u64)atomic64_read(&hist_data->map->drops)); 5593 5591 } 5594 5592 5593 + struct hist_file_data { 5594 + struct file *file; 5595 + u64 last_read; 5596 + u64 last_act; 5597 + }; 5598 + 5599 + static u64 get_hist_hit_count(struct trace_event_file *event_file) 5600 + { 5601 + struct hist_trigger_data *hist_data; 5602 + struct event_trigger_data *data; 5603 + u64 ret = 0; 5604 + 5605 + list_for_each_entry(data, &event_file->triggers, list) { 5606 + if (data->cmd_ops->trigger_type == ETT_EVENT_HIST) { 5607 + hist_data = data->private_data; 5608 + ret += atomic64_read(&hist_data->map->hits); 5609 + } 5610 + } 5611 + return ret; 5612 + } 5613 + 5595 5614 static int hist_show(struct seq_file *m, void *v) 5596 5615 { 5616 + struct hist_file_data *hist_file = m->private; 5597 5617 struct event_trigger_data *data; 5598 5618 struct trace_event_file *event_file; 5599 - int n = 0, ret = 0; 5619 + int n = 0; 5600 5620 5601 - mutex_lock(&event_mutex); 5621 + guard(mutex)(&event_mutex); 5602 5622 5603 - event_file = event_file_file(m->private); 5604 - if (unlikely(!event_file)) { 5605 - ret = -ENODEV; 5606 - goto out_unlock; 5607 - } 5623 + event_file = event_file_file(hist_file->file); 5624 + if (unlikely(!event_file)) 5625 + return -ENODEV; 5608 5626 5609 5627 list_for_each_entry(data, &event_file->triggers, list) { 5610 5628 if (data->cmd_ops->trigger_type == ETT_EVENT_HIST) 5611 5629 hist_trigger_show(m, data, n++); 5612 5630 } 5631 + hist_file->last_read = get_hist_hit_count(event_file); 5632 + /* 5633 + * Update last_act too so that poll()/POLLPRI can wait for the next 5634 + * event after any syscall on hist file. 
5635 + */ 5636 + hist_file->last_act = hist_file->last_read; 5613 5637 5614 - out_unlock: 5615 - mutex_unlock(&event_mutex); 5638 + return 0; 5639 + } 5640 + 5641 + static __poll_t event_hist_poll(struct file *file, struct poll_table_struct *wait) 5642 + { 5643 + struct trace_event_file *event_file; 5644 + struct seq_file *m = file->private_data; 5645 + struct hist_file_data *hist_file = m->private; 5646 + __poll_t ret = 0; 5647 + u64 cnt; 5648 + 5649 + guard(mutex)(&event_mutex); 5650 + 5651 + event_file = event_file_data(file); 5652 + if (!event_file) 5653 + return EPOLLERR; 5654 + 5655 + hist_poll_wait(file, wait); 5656 + 5657 + cnt = get_hist_hit_count(event_file); 5658 + if (hist_file->last_read != cnt) 5659 + ret |= EPOLLIN | EPOLLRDNORM; 5660 + if (hist_file->last_act != cnt) { 5661 + hist_file->last_act = cnt; 5662 + ret |= EPOLLPRI; 5663 + } 5616 5664 5617 5665 return ret; 5618 5666 } 5619 5667 5668 + static int event_hist_release(struct inode *inode, struct file *file) 5669 + { 5670 + struct seq_file *m = file->private_data; 5671 + struct hist_file_data *hist_file = m->private; 5672 + 5673 + kfree(hist_file); 5674 + return tracing_single_release_file_tr(inode, file); 5675 + } 5676 + 5620 5677 static int event_hist_open(struct inode *inode, struct file *file) 5621 5678 { 5679 + struct trace_event_file *event_file; 5680 + struct hist_file_data *hist_file; 5622 5681 int ret; 5623 5682 5624 5683 ret = tracing_open_file_tr(inode, file); 5625 5684 if (ret) 5626 5685 return ret; 5627 5686 5687 + guard(mutex)(&event_mutex); 5688 + 5689 + event_file = event_file_data(file); 5690 + if (!event_file) 5691 + return -ENODEV; 5692 + 5693 + hist_file = kzalloc(sizeof(*hist_file), GFP_KERNEL); 5694 + if (!hist_file) 5695 + return -ENOMEM; 5696 + 5697 + hist_file->file = file; 5698 + hist_file->last_act = get_hist_hit_count(event_file); 5699 + 5628 5700 /* Clear private_data to avoid warning in single_open() */ 5629 5701 file->private_data = NULL; 5630 - return 
single_open(file, hist_show, file); 5702 + ret = single_open(file, hist_show, hist_file); 5703 + if (ret) 5704 + kfree(hist_file); 5705 + 5706 + return ret; 5631 5707 } 5632 5708 5633 5709 const struct file_operations event_hist_fops = { 5634 5710 .open = event_hist_open, 5635 5711 .read = seq_read, 5636 5712 .llseek = seq_lseek, 5637 - .release = tracing_single_release_file_tr, 5713 + .release = event_hist_release, 5714 + .poll = event_hist_poll, 5638 5715 }; 5639 5716 5640 5717 #ifdef CONFIG_HIST_TRIGGERS_DEBUG ··· 5954 5873 { 5955 5874 struct event_trigger_data *data; 5956 5875 struct trace_event_file *event_file; 5957 - int n = 0, ret = 0; 5876 + int n = 0; 5958 5877 5959 - mutex_lock(&event_mutex); 5878 + guard(mutex)(&event_mutex); 5960 5879 5961 5880 event_file = event_file_file(m->private); 5962 - if (unlikely(!event_file)) { 5963 - ret = -ENODEV; 5964 - goto out_unlock; 5965 - } 5881 + if (unlikely(!event_file)) 5882 + return -ENODEV; 5966 5883 5967 5884 list_for_each_entry(data, &event_file->triggers, list) { 5968 5885 if (data->cmd_ops->trigger_type == ETT_EVENT_HIST) 5969 5886 hist_trigger_debug_show(m, data, n++); 5970 5887 } 5971 - 5972 - out_unlock: 5973 - mutex_unlock(&event_mutex); 5974 - 5975 - return ret; 5888 + return 0; 5976 5889 } 5977 5890 5978 5891 static int event_hist_debug_open(struct inode *inode, struct file *file)
+5 -12
kernel/trace/trace_events_synth.c
··· 49 49 50 50 static int errpos(const char *str) 51 51 { 52 - int ret = 0; 53 - 54 - mutex_lock(&lastcmd_mutex); 52 + guard(mutex)(&lastcmd_mutex); 55 53 if (!str || !last_cmd) 56 - goto out; 54 + return 0; 57 55 58 - ret = err_pos(last_cmd, str); 59 - out: 60 - mutex_unlock(&lastcmd_mutex); 61 - return ret; 56 + return err_pos(last_cmd, str); 62 57 } 63 58 64 59 static void last_cmd_set(const char *str) ··· 69 74 70 75 static void synth_err(u8 err_type, u16 err_pos) 71 76 { 72 - mutex_lock(&lastcmd_mutex); 77 + guard(mutex)(&lastcmd_mutex); 73 78 if (!last_cmd) 74 - goto out; 79 + return; 75 80 76 81 tracing_log_err(NULL, "synthetic_events", last_cmd, err_text, 77 82 err_type, err_pos); 78 - out: 79 - mutex_unlock(&lastcmd_mutex); 80 83 } 81 84 82 85 static int create_synth_event(const char *raw_command);
+27 -48
kernel/trace/trace_events_trigger.c
··· 211 211 if (ret) 212 212 return ret; 213 213 214 - mutex_lock(&event_mutex); 214 + guard(mutex)(&event_mutex); 215 215 216 - if (unlikely(!event_file_file(file))) { 217 - mutex_unlock(&event_mutex); 216 + if (unlikely(!event_file_file(file))) 218 217 return -ENODEV; 219 - } 220 218 221 219 if ((file->f_mode & FMODE_WRITE) && 222 220 (file->f_flags & O_TRUNC)) { ··· 237 239 } 238 240 } 239 241 240 - mutex_unlock(&event_mutex); 241 - 242 242 return ret; 243 243 } 244 244 ··· 244 248 { 245 249 char *command, *next; 246 250 struct event_command *p; 247 - int ret = -EINVAL; 248 251 249 252 next = buff = skip_spaces(buff); 250 253 command = strsep(&next, ": \t"); ··· 254 259 } 255 260 command = (command[0] != '!') ? command : command + 1; 256 261 257 - mutex_lock(&trigger_cmd_mutex); 258 - list_for_each_entry(p, &trigger_commands, list) { 259 - if (strcmp(p->name, command) == 0) { 260 - ret = p->parse(p, file, buff, command, next); 261 - goto out_unlock; 262 - } 263 - } 264 - out_unlock: 265 - mutex_unlock(&trigger_cmd_mutex); 262 + guard(mutex)(&trigger_cmd_mutex); 266 263 267 - return ret; 264 + list_for_each_entry(p, &trigger_commands, list) { 265 + if (strcmp(p->name, command) == 0) 266 + return p->parse(p, file, buff, command, next); 267 + } 268 + 269 + return -EINVAL; 268 270 } 269 271 270 272 static ssize_t event_trigger_regex_write(struct file *file, ··· 270 278 { 271 279 struct trace_event_file *event_file; 272 280 ssize_t ret; 273 - char *buf; 281 + char *buf __free(kfree) = NULL; 274 282 275 283 if (!cnt) 276 284 return 0; ··· 284 292 285 293 strim(buf); 286 294 287 - mutex_lock(&event_mutex); 288 - event_file = event_file_file(file); 289 - if (unlikely(!event_file)) { 290 - mutex_unlock(&event_mutex); 291 - kfree(buf); 292 - return -ENODEV; 293 - } 294 - ret = trigger_process_regex(event_file, buf); 295 - mutex_unlock(&event_mutex); 295 + guard(mutex)(&event_mutex); 296 296 297 - kfree(buf); 297 + event_file = event_file_file(file); 298 + if 
(unlikely(!event_file)) 299 + return -ENODEV; 300 + 301 + ret = trigger_process_regex(event_file, buf); 298 302 if (ret < 0) 299 - goto out; 303 + return ret; 300 304 301 305 *ppos += cnt; 302 - ret = cnt; 303 - out: 304 - return ret; 306 + return cnt; 305 307 } 306 308 307 309 static int event_trigger_regex_release(struct inode *inode, struct file *file) ··· 345 359 __init int register_event_command(struct event_command *cmd) 346 360 { 347 361 struct event_command *p; 348 - int ret = 0; 349 362 350 - mutex_lock(&trigger_cmd_mutex); 363 + guard(mutex)(&trigger_cmd_mutex); 364 + 351 365 list_for_each_entry(p, &trigger_commands, list) { 352 - if (strcmp(cmd->name, p->name) == 0) { 353 - ret = -EBUSY; 354 - goto out_unlock; 355 - } 366 + if (strcmp(cmd->name, p->name) == 0) 367 + return -EBUSY; 356 368 } 357 369 list_add(&cmd->list, &trigger_commands); 358 - out_unlock: 359 - mutex_unlock(&trigger_cmd_mutex); 360 370 361 - return ret; 371 + return 0; 362 372 } 363 373 364 374 /* ··· 364 382 __init int unregister_event_command(struct event_command *cmd) 365 383 { 366 384 struct event_command *p, *n; 367 - int ret = -ENODEV; 368 385 369 - mutex_lock(&trigger_cmd_mutex); 386 + guard(mutex)(&trigger_cmd_mutex); 387 + 370 388 list_for_each_entry_safe(p, n, &trigger_commands, list) { 371 389 if (strcmp(cmd->name, p->name) == 0) { 372 - ret = 0; 373 390 list_del_init(&p->list); 374 - goto out_unlock; 391 + return 0; 375 392 } 376 393 } 377 - out_unlock: 378 - mutex_unlock(&trigger_cmd_mutex); 379 394 380 - return ret; 395 + return -ENODEV; 381 396 } 382 397 383 398 /**
+13 -27
kernel/trace/trace_osnoise.c
··· 2083 2083 { 2084 2084 unsigned int cpu = smp_processor_id(); 2085 2085 2086 - mutex_lock(&trace_types_lock); 2086 + guard(mutex)(&trace_types_lock); 2087 2087 2088 2088 if (!osnoise_has_registered_instances()) 2089 - goto out_unlock_trace; 2089 + return; 2090 2090 2091 - mutex_lock(&interface_lock); 2092 - cpus_read_lock(); 2091 + guard(mutex)(&interface_lock); 2092 + guard(cpus_read_lock)(); 2093 2093 2094 2094 if (!cpu_online(cpu)) 2095 - goto out_unlock; 2095 + return; 2096 + 2096 2097 if (!cpumask_test_cpu(cpu, &osnoise_cpumask)) 2097 - goto out_unlock; 2098 + return; 2098 2099 2099 2100 start_kthread(cpu); 2100 - 2101 - out_unlock: 2102 - cpus_read_unlock(); 2103 - mutex_unlock(&interface_lock); 2104 - out_unlock_trace: 2105 - mutex_unlock(&trace_types_lock); 2106 2101 } 2107 2102 2108 2103 static DECLARE_WORK(osnoise_hotplug_work, osnoise_hotplug_workfn); ··· 2295 2300 osnoise_cpus_read(struct file *filp, char __user *ubuf, size_t count, 2296 2301 loff_t *ppos) 2297 2302 { 2298 - char *mask_str; 2303 + char *mask_str __free(kfree) = NULL; 2299 2304 int len; 2300 2305 2301 - mutex_lock(&interface_lock); 2306 + guard(mutex)(&interface_lock); 2302 2307 2303 2308 len = snprintf(NULL, 0, "%*pbl\n", cpumask_pr_args(&osnoise_cpumask)) + 1; 2304 2309 mask_str = kmalloc(len, GFP_KERNEL); 2305 - if (!mask_str) { 2306 - count = -ENOMEM; 2307 - goto out_unlock; 2308 - } 2310 + if (!mask_str) 2311 + return -ENOMEM; 2309 2312 2310 2313 len = snprintf(mask_str, len, "%*pbl\n", cpumask_pr_args(&osnoise_cpumask)); 2311 - if (len >= count) { 2312 - count = -EINVAL; 2313 - goto out_free; 2314 - } 2314 + if (len >= count) 2315 + return -EINVAL; 2315 2316 2316 2317 count = simple_read_from_buffer(ubuf, count, ppos, mask_str, len); 2317 - 2318 - out_free: 2319 - kfree(mask_str); 2320 - out_unlock: 2321 - mutex_unlock(&interface_lock); 2322 2318 2323 2319 return count; 2324 2320 }
+2 -4
kernel/trace/trace_stack.c
··· 520 520 int was_enabled; 521 521 int ret; 522 522 523 - mutex_lock(&stack_sysctl_mutex); 523 + guard(mutex)(&stack_sysctl_mutex); 524 524 was_enabled = !!stack_tracer_enabled; 525 525 526 526 ret = proc_dointvec(table, write, buffer, lenp, ppos); 527 527 528 528 if (ret || !write || (was_enabled == !!stack_tracer_enabled)) 529 - goto out; 529 + return ret; 530 530 531 531 if (stack_tracer_enabled) 532 532 register_ftrace_function(&trace_ops); 533 533 else 534 534 unregister_ftrace_function(&trace_ops); 535 - out: 536 - mutex_unlock(&stack_sysctl_mutex); 537 535 return ret; 538 536 } 539 537
+10 -16
kernel/trace/trace_stat.c
···
 	int ret = 0;
 	int i;
 
-	mutex_lock(&session->stat_mutex);
+	guard(mutex)(&session->stat_mutex);
 	__reset_stat_session(session);
 
 	if (!ts->stat_cmp)
···
 
 	stat = ts->stat_start(ts);
 	if (!stat)
-		goto exit;
+		return 0;
 
 	ret = insert_stat(root, stat, ts->stat_cmp);
 	if (ret)
-		goto exit;
+		return ret;
 
 	/*
 	 * Iterate over the tracer stat entries and store them in an rbtree.
···
 		goto exit_free_rbtree;
 	}
 
- exit:
-	mutex_unlock(&session->stat_mutex);
 	return ret;
 
 exit_free_rbtree:
 	__reset_stat_session(session);
-	mutex_unlock(&session->stat_mutex);
 	return ret;
 }
 
···
 int register_stat_tracer(struct tracer_stat *trace)
 {
 	struct stat_session *session, *node;
-	int ret = -EINVAL;
+	int ret;
 
 	if (!trace)
 		return -EINVAL;
···
 	if (!trace->stat_start || !trace->stat_next || !trace->stat_show)
 		return -EINVAL;
 
+	guard(mutex)(&all_stat_sessions_mutex);
+
 	/* Already registered? */
-	mutex_lock(&all_stat_sessions_mutex);
 	list_for_each_entry(node, &all_stat_sessions, session_list) {
 		if (node->ts == trace)
-			goto out;
+			return -EINVAL;
 	}
 
-	ret = -ENOMEM;
 	/* Init the session */
 	session = kzalloc(sizeof(*session), GFP_KERNEL);
 	if (!session)
-		goto out;
+		return -ENOMEM;
 
 	session->ts = trace;
 	INIT_LIST_HEAD(&session->session_list);
···
 	ret = init_stat_file(session);
 	if (ret) {
 		destroy_session(session);
-		goto out;
+		return ret;
 	}
 
-	ret = 0;
 	/* Register */
 	list_add_tail(&session->session_list, &all_stat_sessions);
- out:
-	mutex_unlock(&all_stat_sessions_mutex);
 
-	return ret;
+	return 0;
 }
 
 void unregister_stat_tracer(struct tracer_stat *trace)
+2
tools/testing/selftests/ftrace/Makefile
···
 TEST_FILES := test.d settings
 EXTRA_CLEAN := $(OUTPUT)/logs/*
 
+TEST_GEN_PROGS = poll
+
 include ../lib.mk
+74
tools/testing/selftests/ftrace/poll.c
···
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Simple poll on a file.
+ *
+ * Copyright (c) 2024 Google LLC.
+ */
+
+#include <errno.h>
+#include <fcntl.h>
+#include <poll.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <unistd.h>
+
+#define BUFSIZE 4096
+
+/*
+ * Usage:
+ *  poll [-I|-P] [-t timeout] FILE
+ */
+int main(int argc, char *argv[])
+{
+	struct pollfd pfd = {.events = POLLIN};
+	char buf[BUFSIZE];
+	int timeout = -1;
+	int ret, opt;
+
+	while ((opt = getopt(argc, argv, "IPt:")) != -1) {
+		switch (opt) {
+		case 'I':
+			pfd.events = POLLIN;
+			break;
+		case 'P':
+			pfd.events = POLLPRI;
+			break;
+		case 't':
+			timeout = atoi(optarg);
+			break;
+		default:
+			fprintf(stderr, "Usage: %s [-I|-P] [-t timeout] FILE\n",
+				argv[0]);
+			return -1;
+		}
+	}
+	if (optind >= argc) {
+		fprintf(stderr, "Error: Polling file is not specified\n");
+		return -1;
+	}
+
+	pfd.fd = open(argv[optind], O_RDONLY);
+	if (pfd.fd < 0) {
+		fprintf(stderr, "failed to open %s", argv[optind]);
+		perror("open");
+		return -1;
+	}
+
+	/* Reset poll by read if POLLIN is specified. */
+	if (pfd.events & POLLIN)
+		do {} while (read(pfd.fd, buf, BUFSIZE) == BUFSIZE);
+
+	ret = poll(&pfd, 1, timeout);
+	if (ret < 0 && errno != EINTR) {
+		perror("poll");
+		return -1;
+	}
+	close(pfd.fd);
+
+	/* If timeout happened (ret == 0), exit code is 1 */
+	if (ret == 0)
+		return 1;
+
+	return 0;
+}
+191
tools/testing/selftests/ftrace/test.d/event/event-mod.tc
···
+#!/bin/sh
+# SPDX-License-Identifier: GPL-2.0
+# description: event tracing - enable/disable with module event
+# requires: set_event "Can enable module events via: :mod:":README
+# flags: instance
+
+rmmod trace-events-sample ||:
+if ! modprobe trace-events-sample ; then
+  echo "No trace-events sample module - please make CONFIG_SAMPLE_TRACE_EVENTS=m"
+  exit_unresolved;
+fi
+trap "rmmod trace-events-sample" EXIT
+
+# Set events for the module
+echo ":mod:trace-events-sample" > set_event
+
+test_all_enabled() {
+
+	# Check if more than one is enabled
+	grep -q sample-trace:foo_bar set_event
+	grep -q sample-trace:foo_bar_with_cond set_event
+	grep -q sample-trace:foo_bar_with_fn set_event
+
+	# All of them should be enabled. Check via the enable file
+	val=`cat events/sample-trace/enable`
+	if [ $val -ne 1 ]; then
+		exit_fail
+	fi
+}
+
+clear_events() {
+	echo > set_event
+	val=`cat events/enable`
+	if [ "$val" != "0" ]; then
+		exit_fail
+	fi
+	count=`cat set_event | wc -l`
+	if [ $count -ne 0 ]; then
+		exit_fail
+	fi
+}
+
+test_all_enabled
+
+echo clear all events
+echo 0 > events/enable
+
+echo Confirm the events are disabled
+val=`cat events/sample-trace/enable`
+if [ $val -ne 0 ]; then
+	exit_fail
+fi
+
+echo And the set_event file is empty
+
+cnt=`cat set_event | wc -l`
+if [ $cnt -ne 0 ]; then
+	exit_fail
+fi
+
+echo now enable all events
+echo 1 > events/enable
+
+echo Confirm the events are enabled again
+val=`cat events/sample-trace/enable`
+if [ $val -ne 1 ]; then
+	exit_fail
+fi
+
+echo disable just the module events
+echo '!:mod:trace-events-sample' >> set_event
+
+echo Should have mix of events enabled
+val=`cat events/enable`
+if [ "$val" != "X" ]; then
+	exit_fail
+fi
+
+echo Confirm the module events are disabled
+val=`cat events/sample-trace/enable`
+if [ $val -ne 0 ]; then
+	exit_fail
+fi
+
+echo 0 > events/enable
+
+echo now enable the system events
+echo 'sample-trace:mod:trace-events-sample' > set_event
+
+test_all_enabled
+
+echo clear all events
+echo 0 > events/enable
+
+echo Confirm the events are disabled
+val=`cat events/sample-trace/enable`
+if [ $val -ne 0 ]; then
+	exit_fail
+fi
+
+echo Test enabling foo_bar only
+echo 'foo_bar:mod:trace-events-sample' > set_event
+
+grep -q sample-trace:foo_bar set_event
+
+echo make sure nothing is found besides foo_bar
+if grep -q -v sample-trace:foo_bar set_event ; then
+	exit_fail
+fi
+
+echo Append another using the system and event name
+echo 'sample-trace:foo_bar_with_cond:mod:trace-events-sample' >> set_event
+
+grep -q sample-trace:foo_bar set_event
+grep -q sample-trace:foo_bar_with_cond set_event
+
+count=`cat set_event | wc -l`
+
+if [ $count -ne 2 ]; then
+	exit_fail
+fi
+
+clear_events
+
+rmmod trace-events-sample
+
+echo ':mod:trace-events-sample' > set_event
+
+echo make sure that the module shows up, and '-' is converted to '_'
+grep -q '\*:\*:mod:trace_events_sample' set_event
+
+modprobe trace-events-sample
+
+test_all_enabled
+
+clear_events
+
+rmmod trace-events-sample
+
+echo Enable just the system events
+echo 'sample-trace:mod:trace-events-sample' > set_event
+grep -q 'sample-trace:mod:trace_events_sample' set_event
+
+modprobe trace-events-sample
+
+test_all_enabled
+
+clear_events
+
+rmmod trace-events-sample
+
+echo Enable event with just event name
+echo 'foo_bar:mod:trace-events-sample' > set_event
+grep -q 'foo_bar:mod:trace_events_sample' set_event
+
+echo Enable another event with both system and event name
+echo 'sample-trace:foo_bar_with_cond:mod:trace-events-sample' >> set_event
+grep -q 'sample-trace:foo_bar_with_cond:mod:trace_events_sample' set_event
+echo Make sure the other event was still there
+grep -q 'foo_bar:mod:trace_events_sample' set_event
+
+modprobe trace-events-sample
+
+echo There should be no :mod: cached events
+if grep -q ':mod:' set_event; then
+	exit_fail
+fi
+
+echo two events should be enabled
+count=`cat set_event | wc -l`
+if [ $count -ne 2 ]; then
+	exit_fail
+fi
+
+echo only two events should be enabled
+val=`cat events/sample-trace/enable`
+if [ "$val" != "X" ]; then
+	exit_fail
+fi
+
+val=`cat events/sample-trace/foo_bar/enable`
+if [ "$val" != "1" ]; then
+	exit_fail
+fi
+
+val=`cat events/sample-trace/foo_bar_with_cond/enable`
+if [ "$val" != "1" ]; then
+	exit_fail
+fi
+
+clear_trace
+74
tools/testing/selftests/ftrace/test.d/trigger/trigger-hist-poll.tc
···
+#!/bin/sh
+# SPDX-License-Identifier: GPL-2.0
+# description: event trigger - test poll wait on histogram
+# requires: set_event events/sched/sched_process_free/trigger events/sched/sched_process_free/hist
+# flags: instance
+
+POLL=${FTRACETEST_ROOT}/poll
+
+if [ ! -x ${POLL} ]; then
+  echo "poll program is not compiled!"
+  exit_unresolved
+fi
+
+EVENT=events/sched/sched_process_free/
+
+# Check poll ops is supported. Before implementing poll on hist file, it
+# returns soon with POLLIN | POLLOUT, but not POLLPRI.
+
+# This must wait >1 sec and return 1 (timeout).
+set +e
+${POLL} -I -t 1000 ${EVENT}/hist
+ret=$?
+set -e
+if [ ${ret} != 1 ]; then
+  echo "poll on hist file is not supported"
+  exit_unsupported
+fi
+
+# Test POLLIN
+echo > trace
+echo 'hist:key=comm if comm =="sleep"' > ${EVENT}/trigger
+echo 1 > ${EVENT}/enable
+
+# This sleep command will exit after 2 seconds.
+sleep 2 &
+BGPID=$!
+# if timeout happens, poll returns 1.
+${POLL} -I -t 4000 ${EVENT}/hist
+echo 0 > tracing_on
+
+if [ -d /proc/${BGPID} ]; then
+  echo "poll exits too soon"
+  kill -KILL ${BGPID} ||:
+  exit_fail
+fi
+
+if ! grep -qw "sleep" trace; then
+  echo "poll exits before event happens"
+  exit_fail
+fi
+
+# Test POLLPRI
+echo > trace
+echo 1 > tracing_on
+
+# This sleep command will exit after 2 seconds.
+sleep 2 &
+BGPID=$!
+# if timeout happens, poll returns 1.
+${POLL} -P -t 4000 ${EVENT}/hist
+echo 0 > tracing_on
+
+if [ -d /proc/${BGPID} ]; then
+  echo "poll exits too soon"
+  kill -KILL ${BGPID} ||:
+  exit_fail
+fi
+
+if ! grep -qw "sleep" trace; then
+  echo "poll exits before event happens"
+  exit_fail
+fi
+
+exit_pass