
Merge tag 'trace-3.17' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace

Pull tracing updates from Steven Rostedt:
"This pull request contains a lot of work. The main change is to the
ftrace function callback infrastructure: it introduces a way to let
different functions call different trampolines directly, instead of
all of them calling the same "mcount" one.

The only user of this for now is the function graph tracer, which has
always had its own trampoline. Previously the function tracer
trampoline was called (and did basically nothing), and then the
function graph tracer trampoline was called. The difference now is
that the function graph tracer trampoline can be called directly if a
function is being traced only by the function graph tracer. If
function tracing is also happening on the same function, the old path
is still taken.

The accounting for this takes up more memory when function graph
tracing is activated, as it needs to keep track of which functions it
uses. I have a new way that won't take as much memory, but it's not
ready yet for this merge window and will have to wait for the next
one.

Another big change was the removal of the ftrace_start/stop() calls
that the suspend/resume code used to stop function tracing when
entering the suspend and resume paths. Ftrace was stopped there
because some function in those paths would crash the system if it
called smp_processor_id()! The stop/start was a big hammer to solve
the issue at the time, which was when ftrace was first introduced
into Linux. Now ftrace has better infrastructure to debug such
issues: I found the problem function and labeled it with "notrace",
and function tracing can now safely be activated all the way down
into the guts of suspend and resume.

Other changes include cleanups of the uprobe code, a cleanup of the
trace_seq code, and various other small fixes and cleanups to ftrace
and tracing"

* tag 'trace-3.17' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace: (57 commits)
ftrace: Add warning if tramp hash does not match nr_trampolines
ftrace: Fix trampoline hash update check on rec->flags
ring-buffer: Use rb_page_size() instead of open coded head_page size
ftrace: Rename ftrace_ops field from trampolines to nr_trampolines
tracing: Convert local function_graph functions to static
ftrace: Do not copy old hash when resetting
tracing: let user specify tracing_thresh after selecting function_graph
ring-buffer: Always run per-cpu ring buffer resize with schedule_work_on()
tracing: Remove function_trace_stop and HAVE_FUNCTION_TRACE_MCOUNT_TEST
s390/ftrace: remove check of obsolete variable function_trace_stop
arm64, ftrace: Remove check of obsolete variable function_trace_stop
Blackfin: ftrace: Remove check of obsolete variable function_trace_stop
metag: ftrace: Remove check of obsolete variable function_trace_stop
microblaze: ftrace: Remove check of obsolete variable function_trace_stop
MIPS: ftrace: Remove check of obsolete variable function_trace_stop
parisc: ftrace: Remove check of obsolete variable function_trace_stop
sh: ftrace: Remove check of obsolete variable function_trace_stop
sparc64,ftrace: Remove check of obsolete variable function_trace_stop
tile: ftrace: Remove check of obsolete variable function_trace_stop
ftrace: x86: Remove check of obsolete variable function_trace_stop
...

Diffstat: +1032 -693
Documentation/kernel-parameters.txt (+6)
···
 			that can be changed at run time by the
 			set_graph_function file in the debugfs tracing directory.
 
+	ftrace_graph_notrace=[function-list]
+			[FTRACE] Do not trace from the functions specified in
+			function-list. This list is a comma separated list of
+			functions that can be changed at run time by the
+			set_graph_notrace file in the debugfs tracing directory.
+
 	gamecon.map[2|3]=
 			[HW,JOY] Multisystem joystick and NES/SNES/PSX pad
 			support via parallel port (up to 5 devices per port)
Documentation/trace/ftrace-design.txt (-26)
···
 EXPORT_SYMBOL(mcount);
 
 
-HAVE_FUNCTION_TRACE_MCOUNT_TEST
--------------------------------
-
-This is an optional optimization for the normal case when tracing is turned off
-in the system. If you do not enable this Kconfig option, the common ftrace
-code will take care of doing the checking for you.
-
-To support this feature, you only need to check the function_trace_stop
-variable in the mcount function. If it is non-zero, there is no tracing to be
-done at all, so you can return.
-
-This additional pseudo code would simply be:
-void mcount(void)
-{
-	/* save any bare state needed in order to do initial checking */
-
-+	if (function_trace_stop)
-+		return;
-
-	extern void (*ftrace_trace_function)(unsigned long, unsigned long);
-	if (ftrace_trace_function != ftrace_stub)
-		...
-
-
 HAVE_FUNCTION_GRAPH_TRACER
 --------------------------
···
 
 void ftrace_caller(void)
 {
-	/* implement HAVE_FUNCTION_TRACE_MCOUNT_TEST if you desire */
-
 	/* save all state needed by the ABI (see paragraph above) */
 
 	unsigned long frompc = ...;
arch/arm64/kernel/entry-ftrace.S (-5)
···
  * - ftrace_graph_caller to set up an exit hook
  */
 ENTRY(_mcount)
-#ifdef CONFIG_HAVE_FUNCTION_TRACE_MCOUNT_TEST
-	ldr	x0, =ftrace_trace_stop
-	ldr	x0, [x0]		// if ftrace_trace_stop
-	ret				// return;
-#endif
 	mcount_enter
 
 	ldr	x0, =ftrace_trace_function
arch/blackfin/Kconfig (-1)
···
 	select HAVE_FTRACE_MCOUNT_RECORD
 	select HAVE_FUNCTION_GRAPH_TRACER
 	select HAVE_FUNCTION_TRACER
-	select HAVE_FUNCTION_TRACE_MCOUNT_TEST
 	select HAVE_IDE
 	select HAVE_KERNEL_GZIP if RAMKERNEL
 	select HAVE_KERNEL_BZIP2 if RAMKERNEL
arch/blackfin/kernel/ftrace-entry.S (-18)
···
  * function will be waiting there. mmmm pie.
  */
 ENTRY(_ftrace_caller)
-# ifdef CONFIG_HAVE_FUNCTION_TRACE_MCOUNT_TEST
-	/* optional micro optimization: return if stopped */
-	p1.l = _function_trace_stop;
-	p1.h = _function_trace_stop;
-	r3 = [p1];
-	cc = r3 == 0;
-	if ! cc jump _ftrace_stub (bp);
-# endif
-
 	/* save first/second/third function arg and the return register */
 	[--sp] = r2;
 	[--sp] = r0;
···
 
 /* See documentation for _ftrace_caller */
 ENTRY(__mcount)
-# ifdef CONFIG_HAVE_FUNCTION_TRACE_MCOUNT_TEST
-	/* optional micro optimization: return if stopped */
-	p1.l = _function_trace_stop;
-	p1.h = _function_trace_stop;
-	r3 = [p1];
-	cc = r3 == 0;
-	if ! cc jump _ftrace_stub (bp);
-# endif
-
 	/* save third function arg early so we can do testing below */
 	[--sp] = r2;
 
arch/metag/Kconfig (-1)
···
 	select HAVE_DYNAMIC_FTRACE
 	select HAVE_FTRACE_MCOUNT_RECORD
 	select HAVE_FUNCTION_TRACER
-	select HAVE_FUNCTION_TRACE_MCOUNT_TEST
 	select HAVE_KERNEL_BZIP2
 	select HAVE_KERNEL_GZIP
 	select HAVE_KERNEL_LZO
arch/metag/kernel/ftrace_stub.S (-14)
···
 	.global	_ftrace_caller
 	.type	_ftrace_caller,function
 _ftrace_caller:
-	MOVT	D0Re0,#HI(_function_trace_stop)
-	ADD	D0Re0,D0Re0,#LO(_function_trace_stop)
-	GETD	D0Re0,[D0Re0]
-	CMP	D0Re0,#0
-	BEQ	$Lcall_stub
-	MOV	PC,D0.4
-$Lcall_stub:
 	MSETL	[A0StP], D0Ar6, D0Ar4, D0Ar2, D0.4
 	MOV	D1Ar1, D0.4
 	MOV	D0Ar2, D1RtP
···
 	.global	_mcount_wrapper
 	.type	_mcount_wrapper,function
 _mcount_wrapper:
-	MOVT	D0Re0,#HI(_function_trace_stop)
-	ADD	D0Re0,D0Re0,#LO(_function_trace_stop)
-	GETD	D0Re0,[D0Re0]
-	CMP	D0Re0,#0
-	BEQ	$Lcall_mcount
-	MOV	PC,D0.4
-$Lcall_mcount:
 	MSETL	[A0StP], D0Ar6, D0Ar4, D0Ar2, D0.4
 	MOV	D1Ar1, D0.4
 	MOV	D0Ar2, D1RtP
arch/microblaze/Kconfig (-1)
···
 	select HAVE_DYNAMIC_FTRACE
 	select HAVE_FTRACE_MCOUNT_RECORD
 	select HAVE_FUNCTION_GRAPH_TRACER
-	select HAVE_FUNCTION_TRACE_MCOUNT_TEST
 	select HAVE_FUNCTION_TRACER
 	select HAVE_MEMBLOCK
 	select HAVE_MEMBLOCK_NODE_MAP
arch/microblaze/kernel/ftrace.c (+3)
···
 	unsigned long return_hooker = (unsigned long)
 				&return_to_handler;
 
+	if (unlikely(ftrace_graph_is_dead()))
+		return;
+
 	if (unlikely(atomic_read(&current->tracing_graph_pause)))
 		return;
 
arch/microblaze/kernel/mcount.S (-5)
···
 #endif /* CONFIG_DYNAMIC_FTRACE */
 	SAVE_REGS
 	swi	r15, r1, 0;
-	/* MS: HAVE_FUNCTION_TRACE_MCOUNT_TEST begin of checking */
-	lwi	r5, r0, function_trace_stop;
-	bneid	r5, end;
-	nop;
-	/* MS: HAVE_FUNCTION_TRACE_MCOUNT_TEST end of checking */
 #ifdef CONFIG_FUNCTION_GRAPH_TRACER
 #ifndef CONFIG_DYNAMIC_FTRACE
 	lwi	r5, r0, ftrace_graph_return;
arch/mips/Kconfig (-1)
···
 	select HAVE_BPF_JIT if !CPU_MICROMIPS
 	select ARCH_HAVE_CUSTOM_GPIO_H
 	select HAVE_FUNCTION_TRACER
-	select HAVE_FUNCTION_TRACE_MCOUNT_TEST
 	select HAVE_DYNAMIC_FTRACE
 	select HAVE_FTRACE_MCOUNT_RECORD
 	select HAVE_C_RECORDMCOUNT
arch/mips/kernel/ftrace.c (+3)
···
 				&return_to_handler;
 	int faulted, insns;
 
+	if (unlikely(ftrace_graph_is_dead()))
+		return;
+
 	if (unlikely(atomic_read(&current->tracing_graph_pause)))
 		return;
 
arch/mips/kernel/mcount.S (-7)
···
 #endif
 
 	/* When tracing is activated, it calls ftrace_caller+8 (aka here) */
-	lw	t1, function_trace_stop
-	bnez	t1, ftrace_stub
-	nop
-
 	MCOUNT_SAVE_REGS
 #ifdef KBUILD_MCOUNT_RA_ADDRESS
 	PTR_S	MCOUNT_RA_ADDRESS_REG, PT_R12(sp)
···
 #else	/* ! CONFIG_DYNAMIC_FTRACE */
 
 NESTED(_mcount, PT_SIZE, ra)
-	lw	t1, function_trace_stop
-	bnez	t1, ftrace_stub
-	nop
 	PTR_LA	t1, ftrace_stub
 	PTR_L	t2, ftrace_trace_function /* Prepare t2 for (1) */
 	bne	t1, t2, static_trace
arch/parisc/Kconfig (-1)
···
 	select HAVE_OPROFILE
 	select HAVE_FUNCTION_TRACER if 64BIT
 	select HAVE_FUNCTION_GRAPH_TRACER if 64BIT
-	select HAVE_FUNCTION_TRACE_MCOUNT_TEST if 64BIT
 	select ARCH_WANT_FRAME_POINTERS
 	select RTC_CLASS
 	select RTC_DRV_GENERIC
arch/parisc/kernel/ftrace.c (+3 -3)
···
 	unsigned long long calltime;
 	struct ftrace_graph_ent trace;
 
+	if (unlikely(ftrace_graph_is_dead()))
+		return;
+
 	if (unlikely(atomic_read(&current->tracing_graph_pause)))
 		return;
···
 		unsigned long org_sp_gr3)
 {
 	extern ftrace_func_t ftrace_trace_function;
-
-	if (function_trace_stop)
-		return;
 
 	if (ftrace_trace_function != ftrace_stub) {
 		ftrace_trace_function(parent, self_addr);
arch/powerpc/kernel/ftrace.c (+3)
···
 	struct ftrace_graph_ent trace;
 	unsigned long return_hooker = (unsigned long)&return_to_handler;
 
+	if (unlikely(ftrace_graph_is_dead()))
+		return;
+
 	if (unlikely(atomic_read(&current->tracing_graph_pause)))
 		return;
 
arch/s390/Kconfig (-1)
···
 	select HAVE_FTRACE_MCOUNT_RECORD
 	select HAVE_FUNCTION_GRAPH_TRACER
 	select HAVE_FUNCTION_TRACER
-	select HAVE_FUNCTION_TRACE_MCOUNT_TEST
 	select HAVE_FUTEX_CMPXCHG if FUTEX
 	select HAVE_KERNEL_BZIP2
 	select HAVE_KERNEL_GZIP
arch/s390/kernel/mcount.S (+3 -7)
···
 ENTRY(ftrace_caller)
 #endif
 	stm	%r2,%r5,16(%r15)
-	bras	%r1,2f
+	bras	%r1,1f
 0:	.long	ftrace_trace_function
-1:	.long	function_trace_stop
-2:	l	%r2,1b-0b(%r1)
-	icm	%r2,0xf,0(%r2)
-	jnz	3f
-	st	%r14,56(%r15)
+1:	st	%r14,56(%r15)
 	lr	%r0,%r15
 	ahi	%r15,-96
 	l	%r3,100(%r15)
···
 #endif
 	ahi	%r15,96
 	l	%r14,56(%r15)
-3:	lm	%r2,%r5,16(%r15)
+	lm	%r2,%r5,16(%r15)
 	br	%r14
 
 #ifdef CONFIG_FUNCTION_GRAPH_TRACER
arch/s390/kernel/mcount64.S (-3)
···
 
 ENTRY(ftrace_caller)
 #endif
-	larl	%r1,function_trace_stop
-	icm	%r1,0xf,0(%r1)
-	bnzr	%r14
 	stmg	%r2,%r5,32(%r15)
 	stg	%r14,112(%r15)
 	lgr	%r1,%r15
arch/sh/Kconfig (-1)
···
 	select HAVE_FUNCTION_TRACER
 	select HAVE_FTRACE_MCOUNT_RECORD
 	select HAVE_DYNAMIC_FTRACE
-	select HAVE_FUNCTION_TRACE_MCOUNT_TEST
 	select HAVE_FTRACE_NMI_ENTER if DYNAMIC_FTRACE
 	select ARCH_WANT_IPC_PARSE_VERSION
 	select HAVE_FUNCTION_GRAPH_TRACER
arch/sh/kernel/ftrace.c (+3)
···
 	struct ftrace_graph_ent trace;
 	unsigned long return_hooker = (unsigned long)&return_to_handler;
 
+	if (unlikely(ftrace_graph_is_dead()))
+		return;
+
 	if (unlikely(atomic_read(&current->tracing_graph_pause)))
 		return;
 
arch/sh/lib/mcount.S (+2 -22)
···
 	rts
 	 nop
 #else
-#ifndef CONFIG_DYNAMIC_FTRACE
-	mov.l	.Lfunction_trace_stop, r0
-	mov.l	@r0, r0
-	tst	r0, r0
-	bf	ftrace_stub
-#endif
-
 	MCOUNT_ENTER()
 
 #ifdef CONFIG_DYNAMIC_FTRACE
···
 
 	.globl ftrace_caller
 ftrace_caller:
-	mov.l	.Lfunction_trace_stop, r0
-	mov.l	@r0, r0
-	tst	r0, r0
-	bf	ftrace_stub
-
 	MCOUNT_ENTER()
 
 	.globl ftrace_call
···
 #endif /* CONFIG_DYNAMIC_FTRACE */
 
 	.align 2
-.Lfunction_trace_stop:
-	.long	function_trace_stop
 
 /*
  * NOTE: From here on the locations of the .Lftrace_stub label and
···
 #ifdef CONFIG_FUNCTION_GRAPH_TRACER
 	.globl	ftrace_graph_caller
 ftrace_graph_caller:
-	mov.l	2f, r0
-	mov.l	@r0, r0
-	tst	r0, r0
-	bt	1f
-
-	mov.l	3f, r1
+	mov.l	2f, r1
 	jmp	@r1
 	 nop
 1:
···
 	MCOUNT_LEAVE()
 
 	.align 2
-2:	.long	function_trace_stop
-3:	.long	skip_trace
+2:	.long	skip_trace
 .Lprepare_ftrace_return:
 	.long	prepare_ftrace_return
 
arch/sparc/Kconfig (-1)
···
 	select HAVE_FUNCTION_TRACER
 	select HAVE_FUNCTION_GRAPH_TRACER
 	select HAVE_FUNCTION_GRAPH_FP_TEST
-	select HAVE_FUNCTION_TRACE_MCOUNT_TEST
 	select HAVE_KRETPROBES
 	select HAVE_KPROBES
 	select HAVE_RCU_TABLE_FREE if SMP
arch/sparc/lib/mcount.S (+2 -8)
···
 #ifdef CONFIG_DYNAMIC_FTRACE
 	/* Do nothing, the retl/nop below is all we need. */
 #else
-	sethi		%hi(function_trace_stop), %g1
-	lduw		[%g1 + %lo(function_trace_stop)], %g2
-	brnz,pn		%g2, 2f
-	 sethi		%hi(ftrace_trace_function), %g1
+	sethi		%hi(ftrace_trace_function), %g1
 	sethi		%hi(ftrace_stub), %g2
 	ldx		[%g1 + %lo(ftrace_trace_function)], %g1
 	or		%g2, %lo(ftrace_stub), %g2
···
 	.globl		ftrace_caller
 	.type		ftrace_caller,#function
 ftrace_caller:
-	sethi		%hi(function_trace_stop), %g1
 	mov		%i7, %g2
-	lduw		[%g1 + %lo(function_trace_stop)], %g1
-	brnz,pn		%g1, ftrace_stub
-	 mov		%fp, %g3
+	mov		%fp, %g3
 	save		%sp, -176, %sp
 	mov		%g2, %o1
 	mov		%g2, %l0
arch/tile/Kconfig (-1)
···
 	select SPARSE_IRQ
 	select GENERIC_IRQ_LEGACY_ALLOC_HWIRQ
 	select HAVE_FUNCTION_TRACER
-	select HAVE_FUNCTION_TRACE_MCOUNT_TEST
 	select HAVE_FUNCTION_GRAPH_TRACER
 	select HAVE_DYNAMIC_FTRACE
 	select HAVE_FTRACE_MCOUNT_RECORD
arch/tile/kernel/mcount_64.S (-18)
···
 
 	.align	64
 STD_ENTRY(ftrace_caller)
-	moveli	r11, hw2_last(function_trace_stop)
-	{ shl16insli	r11, r11, hw1(function_trace_stop); move r12, lr }
-	{ shl16insli	r11, r11, hw0(function_trace_stop); move lr, r10 }
-	ld	r11, r11
-	beqz	r11, 1f
-	jrp	r12
-
-1:
-	{ move	r10, lr; move	lr, r12 }
 	MCOUNT_SAVE_REGS
 
 	/* arg1: self return address */
···
 
 	.align	64
 STD_ENTRY(__mcount)
-	moveli	r11, hw2_last(function_trace_stop)
-	{ shl16insli	r11, r11, hw1(function_trace_stop); move r12, lr }
-	{ shl16insli	r11, r11, hw0(function_trace_stop); move lr, r10 }
-	ld	r11, r11
-	beqz	r11, 1f
-	jrp	r12
-
-1:
-	{ move	r10, lr; move	lr, r12 }
 	{
 	 moveli	r11, hw2_last(ftrace_trace_function)
 	 moveli	r13, hw2_last(ftrace_stub)
arch/x86/Kconfig (-1)
···
 	select HAVE_FUNCTION_TRACER
 	select HAVE_FUNCTION_GRAPH_TRACER
 	select HAVE_FUNCTION_GRAPH_FP_TEST
-	select HAVE_FUNCTION_TRACE_MCOUNT_TEST
 	select HAVE_SYSCALL_TRACEPOINTS
 	select SYSCTL_EXCEPTION_TRACE
 	select HAVE_KVM
arch/x86/include/asm/ftrace.h (+2)
···
 
 int ftrace_int3_handler(struct pt_regs *regs);
 
+#define FTRACE_GRAPH_TRAMP_ADDR FTRACE_GRAPH_ADDR
+
 #endif /*  CONFIG_DYNAMIC_FTRACE */
 #endif /* __ASSEMBLY__ */
 #endif /* CONFIG_FUNCTION_TRACER */
arch/x86/kernel/entry_32.S (-9)
···
 END(mcount)
 
 ENTRY(ftrace_caller)
-	cmpl $0, function_trace_stop
-	jne  ftrace_stub
-
 	pushl %eax
 	pushl %ecx
 	pushl %edx
···
 
 ENTRY(ftrace_regs_caller)
 	pushf	/* push flags before compare (in cs location) */
-	cmpl $0, function_trace_stop
-	jne ftrace_restore_flags
 
 	/*
 	 * i386 does not save SS and ESP when coming from kernel.
···
 	popf			/* Pop flags at end (no addl to corrupt flags) */
 	jmp ftrace_ret
 
-ftrace_restore_flags:
 	popf
 	jmp ftrace_stub
 #else /* ! CONFIG_DYNAMIC_FTRACE */
···
 ENTRY(mcount)
 	cmpl $__PAGE_OFFSET, %esp
 	jb ftrace_stub		/* Paging not enabled yet? */
-
-	cmpl $0, function_trace_stop
-	jne  ftrace_stub
 
 	cmpl $ftrace_stub, ftrace_trace_function
 	jnz trace
arch/x86/kernel/ftrace.c (+3)
···
 	unsigned long return_hooker = (unsigned long)
 				&return_to_handler;
 
+	if (unlikely(ftrace_graph_is_dead()))
+		return;
+
 	if (unlikely(atomic_read(&current->tracing_graph_pause)))
 		return;
 
arch/x86/kernel/mcount_64.S (+1 -12)
···
 .endm
 
 ENTRY(ftrace_caller)
-	/* Check if tracing was disabled (quick check) */
-	cmpl $0, function_trace_stop
-	jne  ftrace_stub
-
 	ftrace_caller_setup
 	/* regs go into 4th parameter (but make it NULL) */
 	movq $0, %rcx
···
 ENTRY(ftrace_regs_caller)
 	/* Save the current flags before compare (in SS location)*/
 	pushfq
-
-	/* Check if tracing was disabled (quick check) */
-	cmpl $0, function_trace_stop
-	jne  ftrace_restore_flags
 
 	/* skip=8 to skip flags saved in SS */
 	ftrace_caller_setup 8
···
 	popfq
 
 	jmp ftrace_return
-ftrace_restore_flags:
+
 	popfq
 	jmp  ftrace_stub
 
···
 #else /* ! CONFIG_DYNAMIC_FTRACE */
 
 ENTRY(function_hook)
-	cmpl $0, function_trace_stop
-	jne  ftrace_stub
-
 	cmpq $ftrace_stub, ftrace_trace_function
 	jnz trace
 
arch/x86/kvm/mmutrace.h (+1 -1)
···
 	__entry->unsync = sp->unsync;
 
 #define KVM_MMU_PAGE_PRINTK() ({				        \
-	const char *ret = p->buffer + p->len;			        \
+	const char *ret = trace_seq_buffer_ptr(p);		        \
 	static const char *access_str[] = {			        \
 		"---", "--x", "w--", "w-x", "-u-", "-ux", "wu-", "wux"  \
 	};							        \
arch/x86/power/cpu.c (+2 -2)
···
  *	by __save_processor_state()
  * @ctxt - structure to load the registers contents from
  */
-static void __restore_processor_state(struct saved_context *ctxt)
+static void notrace __restore_processor_state(struct saved_context *ctxt)
 {
 	if (ctxt->misc_enable_saved)
 		wrmsrl(MSR_IA32_MISC_ENABLE, ctxt->misc_enable);
···
 }
 
 /* Needed by apm.c */
-void restore_processor_state(void)
+void notrace restore_processor_state(void)
 {
 	__restore_processor_state(&saved_context);
 }
drivers/scsi/scsi_trace.c (+8 -8)
···
 static const char *
 scsi_trace_rw6(struct trace_seq *p, unsigned char *cdb, int len)
 {
-	const char *ret = p->buffer + p->len;
+	const char *ret = trace_seq_buffer_ptr(p);
 	sector_t lba = 0, txlen = 0;
 
 	lba |= ((cdb[1] & 0x1F) << 16);
···
 static const char *
 scsi_trace_rw10(struct trace_seq *p, unsigned char *cdb, int len)
 {
-	const char *ret = p->buffer + p->len;
+	const char *ret = trace_seq_buffer_ptr(p);
 	sector_t lba = 0, txlen = 0;
 
 	lba |= (cdb[2] << 24);
···
 static const char *
 scsi_trace_rw12(struct trace_seq *p, unsigned char *cdb, int len)
 {
-	const char *ret = p->buffer + p->len;
+	const char *ret = trace_seq_buffer_ptr(p);
 	sector_t lba = 0, txlen = 0;
 
 	lba |= (cdb[2] << 24);
···
 static const char *
 scsi_trace_rw16(struct trace_seq *p, unsigned char *cdb, int len)
 {
-	const char *ret = p->buffer + p->len;
+	const char *ret = trace_seq_buffer_ptr(p);
 	sector_t lba = 0, txlen = 0;
 
 	lba |= ((u64)cdb[2] << 56);
···
 static const char *
 scsi_trace_rw32(struct trace_seq *p, unsigned char *cdb, int len)
 {
-	const char *ret = p->buffer + p->len, *cmd;
+	const char *ret = trace_seq_buffer_ptr(p), *cmd;
 	sector_t lba = 0, txlen = 0;
 	u32 ei_lbrt = 0;
···
 static const char *
 scsi_trace_unmap(struct trace_seq *p, unsigned char *cdb, int len)
 {
-	const char *ret = p->buffer + p->len;
+	const char *ret = trace_seq_buffer_ptr(p);
 	unsigned int regions = cdb[7] << 8 | cdb[8];
 
 	trace_seq_printf(p, "regions=%u", (regions - 8) / 16);
···
 static const char *
 scsi_trace_service_action_in(struct trace_seq *p, unsigned char *cdb, int len)
 {
-	const char *ret = p->buffer + p->len, *cmd;
+	const char *ret = trace_seq_buffer_ptr(p), *cmd;
 	sector_t lba = 0;
 	u32 alloc_len = 0;
···
 static const char *
 scsi_trace_misc(struct trace_seq *p, unsigned char *cdb, int len)
 {
-	const char *ret = p->buffer + p->len;
+	const char *ret = trace_seq_buffer_ptr(p);
 
 	trace_seq_printf(p, "-");
 	trace_seq_putc(p, 0);
include/linux/ftrace.h (+31 -37)
···
  * features, then it must call an indirect function that
  * does. Or at least does enough to prevent any unwelcomed side effects.
  */
-#if !defined(CONFIG_HAVE_FUNCTION_TRACE_MCOUNT_TEST) || \
-	!ARCH_SUPPORTS_FTRACE_OPS
+#if !ARCH_SUPPORTS_FTRACE_OPS
 # define FTRACE_FORCE_LIST_FUNC 1
 #else
 # define FTRACE_FORCE_LIST_FUNC 0
···
 	ftrace_func_t			func;
 	struct ftrace_ops		*next;
 	unsigned long			flags;
-	int __percpu			*disabled;
 	void				*private;
+	int __percpu			*disabled;
 #ifdef CONFIG_DYNAMIC_FTRACE
+	int				nr_trampolines;
 	struct ftrace_hash		*notrace_hash;
 	struct ftrace_hash		*filter_hash;
+	struct ftrace_hash		*tramp_hash;
 	struct mutex			regex_lock;
+	unsigned long			trampoline;
 #endif
 };
-
-extern int function_trace_stop;
 
 /*
  * Type of the current tracing.
···
 
 /* Current tracing type, default is FTRACE_TYPE_ENTER */
 extern enum ftrace_tracing_type_t ftrace_tracing_type;
-
-/**
- * ftrace_stop - stop function tracer.
- *
- * A quick way to stop the function tracer. Note this an on off switch,
- * it is not something that is recursive like preempt_disable.
- * This does not disable the calling of mcount, it only stops the
- * calling of functions from mcount.
- */
-static inline void ftrace_stop(void)
-{
-	function_trace_stop = 1;
-}
-
-/**
- * ftrace_start - start the function tracer.
- *
- * This function is the inverse of ftrace_stop. This does not enable
- * the function tracing if the function tracer is disabled. This only
- * sets the function tracer flag to continue calling the functions
- * from mcount.
- */
-static inline void ftrace_start(void)
-{
-	function_trace_stop = 0;
-}
 
 /*
  * The ftrace_ops must be a static and should also
···
 }
 static inline void clear_ftrace_function(void) { }
 static inline void ftrace_kill(void) { }
-static inline void ftrace_stop(void) { }
-static inline void ftrace_start(void) { }
 #endif /* CONFIG_FUNCTION_TRACER */
 
 #ifdef CONFIG_STACK_TRACER
···
  * from tracing that function.
  */
 enum {
-	FTRACE_FL_ENABLED	= (1UL << 29),
+	FTRACE_FL_ENABLED	= (1UL << 31),
 	FTRACE_FL_REGS		= (1UL << 30),
-	FTRACE_FL_REGS_EN	= (1UL << 31)
+	FTRACE_FL_REGS_EN	= (1UL << 29),
+	FTRACE_FL_TRAMP		= (1UL << 28),
+	FTRACE_FL_TRAMP_EN	= (1UL << 27),
 };
 
-#define FTRACE_FL_MASK		(0x7UL << 29)
-#define FTRACE_REF_MAX		((1UL << 29) - 1)
+#define FTRACE_REF_MAX_SHIFT	27
+#define FTRACE_FL_BITS		5
+#define FTRACE_FL_MASKED_BITS	((1UL << FTRACE_FL_BITS) - 1)
+#define FTRACE_FL_MASK		(FTRACE_FL_MASKED_BITS << FTRACE_REF_MAX_SHIFT)
+#define FTRACE_REF_MAX		((1UL << FTRACE_REF_MAX_SHIFT) - 1)
+
+#define ftrace_rec_count(rec)	((rec)->flags & ~FTRACE_FL_MASK)
 
 struct dyn_ftrace {
 	unsigned long		ip; /* address of mcount call-site */
···
 #define FTRACE_ADDR ((unsigned long)ftrace_caller)
 #endif
 
+#ifndef FTRACE_GRAPH_ADDR
+#define FTRACE_GRAPH_ADDR ((unsigned long)ftrace_graph_caller)
+#endif
+
 #ifndef FTRACE_REGS_ADDR
 #ifdef CONFIG_DYNAMIC_FTRACE_WITH_REGS
 # define FTRACE_REGS_ADDR ((unsigned long)ftrace_regs_caller)
 #else
 # define FTRACE_REGS_ADDR FTRACE_ADDR
 #endif
+#endif
+
+/*
+ * If an arch would like functions that are only traced
+ * by the function graph tracer to jump directly to its own
+ * trampoline, then they can define FTRACE_GRAPH_TRAMP_ADDR
+ * to be that address to jump to.
+ */
+#ifndef FTRACE_GRAPH_TRAMP_ADDR
+#define FTRACE_GRAPH_TRAMP_ADDR ((unsigned long) 0)
 #endif
 
 #ifdef CONFIG_FUNCTION_GRAPH_TRACER
···
 extern int register_ftrace_graph(trace_func_graph_ret_t retfunc,
 				trace_func_graph_ent_t entryfunc);
 
+extern bool ftrace_graph_is_dead(void);
 extern void ftrace_graph_stop(void);
 
 /* The current handlers in use */
include/linux/trace_seq.h (+23 -13)
···
 	s->full = 0;
 }
 
+/**
+ * trace_seq_buffer_ptr - return pointer to next location in buffer
+ * @s: trace sequence descriptor
+ *
+ * Returns the pointer to the buffer where the next write to
+ * the buffer will happen. This is useful to save the location
+ * that is about to be written to and then return the result
+ * of that write.
+ */
+static inline unsigned char *
+trace_seq_buffer_ptr(struct trace_seq *s)
+{
+	return s->buffer + s->len;
+}
+
 /*
  * Currently only defined when tracing is enabled.
  */
···
 extern int
 trace_seq_bprintf(struct trace_seq *s, const char *fmt, const u32 *binary);
 extern int trace_print_seq(struct seq_file *m, struct trace_seq *s);
-extern ssize_t trace_seq_to_user(struct trace_seq *s, char __user *ubuf,
-				 size_t cnt);
+extern int trace_seq_to_user(struct trace_seq *s, char __user *ubuf,
+			     int cnt);
 extern int trace_seq_puts(struct trace_seq *s, const char *str);
 extern int trace_seq_putc(struct trace_seq *s, unsigned char c);
-extern int trace_seq_putmem(struct trace_seq *s, const void *mem, size_t len);
+extern int trace_seq_putmem(struct trace_seq *s, const void *mem, unsigned int len);
 extern int trace_seq_putmem_hex(struct trace_seq *s, const void *mem,
-				size_t len);
-extern void *trace_seq_reserve(struct trace_seq *s, size_t len);
+				unsigned int len);
 extern int trace_seq_path(struct trace_seq *s, const struct path *path);
 
 extern int trace_seq_bitmask(struct trace_seq *s, const unsigned long *maskp,
···
 {
 	return 0;
 }
-static inline ssize_t trace_seq_to_user(struct trace_seq *s, char __user *ubuf,
-					size_t cnt)
+static inline int trace_seq_to_user(struct trace_seq *s, char __user *ubuf,
+				    int cnt)
 {
 	return 0;
 }
···
 	return 0;
 }
 static inline int
-trace_seq_putmem(struct trace_seq *s, const void *mem, size_t len)
+trace_seq_putmem(struct trace_seq *s, const void *mem, unsigned int len)
 {
 	return 0;
 }
 static inline int trace_seq_putmem_hex(struct trace_seq *s, const void *mem,
-				       size_t len)
+				       unsigned int len)
 {
 	return 0;
-}
-static inline void *trace_seq_reserve(struct trace_seq *s, size_t len)
-{
-	return NULL;
 }
 static inline int trace_seq_path(struct trace_seq *s, const struct path *path)
 {
kernel/power/hibernate.c (-6)
···
 	}
 
 	suspend_console();
-	ftrace_stop();
 	pm_restrict_gfp_mask();
 
 	error = dpm_suspend(PMSG_FREEZE);
···
 	if (error || !in_suspend)
 		pm_restore_gfp_mask();
 
-	ftrace_start();
 	resume_console();
 	dpm_complete(msg);
 
···
 
 	pm_prepare_console();
 	suspend_console();
-	ftrace_stop();
 	pm_restrict_gfp_mask();
 	error = dpm_suspend_start(PMSG_QUIESCE);
 	if (!error) {
···
 		dpm_resume_end(PMSG_RECOVER);
 	}
 	pm_restore_gfp_mask();
-	ftrace_start();
 	resume_console();
 	pm_restore_console();
 	return error;
···
 
 	entering_platform_hibernation = true;
 	suspend_console();
-	ftrace_stop();
 	error = dpm_suspend_start(PMSG_HIBERNATE);
 	if (error) {
 		if (hibernation_ops->recover)
···
  Resume_devices:
 	entering_platform_hibernation = false;
 	dpm_resume_end(PMSG_RESTORE);
-	ftrace_start();
 	resume_console();
 
  Close:
kernel/power/suspend.c (-2)
···
 		goto Platform_wake;
 	}
 
-	ftrace_stop();
 	error = disable_nonboot_cpus();
 	if (error || suspend_test(TEST_CPUS))
 		goto Enable_cpus;
···
 
  Enable_cpus:
 	enable_nonboot_cpus();
-	ftrace_start();
 
  Platform_wake:
 	if (need_suspend_ops(state) && suspend_ops->wake)
kernel/trace/Kconfig (-5)
···
 	help
 	  See Documentation/trace/ftrace-design.txt
 
-config HAVE_FUNCTION_TRACE_MCOUNT_TEST
-	bool
-	help
-	  See Documentation/trace/ftrace-design.txt
-
 config HAVE_DYNAMIC_FTRACE
 	bool
 	help
kernel/trace/Makefile (+1)
···
 
 obj-$(CONFIG_TRACING) += trace.o
 obj-$(CONFIG_TRACING) += trace_output.o
+obj-$(CONFIG_TRACING) += trace_seq.o
 obj-$(CONFIG_TRACING) += trace_stat.o
 obj-$(CONFIG_TRACING) += trace_printk.o
 obj-$(CONFIG_CONTEXT_SWITCH_TRACER) += trace_sched_switch.o
kernel/trace/ftrace.c (+361 -84)
···
 int ftrace_enabled __read_mostly;
 static int last_ftrace_enabled;
 
-/* Quick disabling of function tracer. */
-int function_trace_stop __read_mostly;
-
 /* Current function tracing op */
 struct ftrace_ops *function_trace_op __read_mostly = &ftrace_list_end;
 /* What to set function_trace_op to */
···
 
 #ifdef CONFIG_DYNAMIC_FTRACE
 
+static struct ftrace_ops *removed_ops;
+
 #ifndef CONFIG_FTRACE_MCOUNT_RECORD
 # error Dynamic ftrace depends on MCOUNT_RECORD
 #endif
···
 	struct ftrace_hash *new_hash;
 	int size = src->count;
 	int bits = 0;
-	int ret;
 	int i;
-
-	/*
-	 * Remove the current set, update the hash and add
-	 * them back.
-	 */
-	ftrace_hash_rec_disable(ops, enable);
 
 	/*
 	 * If the new source is empty, just free dst and assign it
 	 * the empty_hash.
 	 */
 	if (!src->count) {
-		free_ftrace_hash_rcu(*dst);
-		rcu_assign_pointer(*dst, EMPTY_HASH);
-		/* still need to update the function records */
-		ret = 0;
-		goto out;
+		new_hash = EMPTY_HASH;
+		goto update;
 	}
 
 	/*
···
 	if (bits > FTRACE_HASH_MAX_BITS)
 		bits = FTRACE_HASH_MAX_BITS;
 
-	ret = -ENOMEM;
 	new_hash = alloc_ftrace_hash(bits);
 	if (!new_hash)
-		goto out;
+		return -ENOMEM;
 
 	size = 1 << src->size_bits;
 	for (i = 0; i < size; i++) {
···
 		}
 	}
 
+update:
+	/*
+	 * Remove the current set, update the hash and add
+	 * them back.
+	 */
+	ftrace_hash_rec_disable(ops, enable);
+
 	old_hash = *dst;
 	rcu_assign_pointer(*dst, new_hash);
 	free_ftrace_hash_rcu(old_hash);
 
-	ret = 0;
- out:
-	/*
-	 * Enable regardless of ret:
-	 * On success, we enable the new hash.
-	 * On failure, we re-enable the original hash.
-	 */
 	ftrace_hash_rec_enable(ops, enable);
 
-	return ret;
+	return 0;
 }
 
 /*
···
 	return (int)!!ret;
 }
 
+/* Test if ops registered to this rec needs regs */
+static bool test_rec_ops_needs_regs(struct dyn_ftrace *rec)
+{
+	struct ftrace_ops *ops;
+	bool keep_regs = false;
+
+	for (ops = ftrace_ops_list;
+	     ops != &ftrace_list_end; ops = ops->next) {
+		/* pass rec in as regs to have non-NULL val */
+		if (ftrace_ops_test(ops, rec->ip, rec)) {
+			if (ops->flags & FTRACE_OPS_FL_SAVE_REGS) {
+				keep_regs = true;
+				break;
+			}
+		}
+	}
+
+	return keep_regs;
+}
+
+static void ftrace_remove_tramp(struct ftrace_ops *ops,
+				struct dyn_ftrace *rec)
+{
+	struct ftrace_func_entry *entry;
+
+	entry = ftrace_lookup_ip(ops->tramp_hash, rec->ip);
+	if (!entry)
+		return;
+
+	/*
+	 * The tramp_hash entry will be removed at time
+	 * of update.
+	 */
+	ops->nr_trampolines--;
+	rec->flags &= ~FTRACE_FL_TRAMP;
+}
+
+static void ftrace_clear_tramps(struct dyn_ftrace *rec)
+{
+	struct ftrace_ops *op;
+
+	do_for_each_ftrace_op(op, ftrace_ops_list) {
+		if (op->nr_trampolines)
+			ftrace_remove_tramp(op, rec);
+	} while_for_each_ftrace_op(op);
+}
+
 static void __ftrace_hash_rec_update(struct ftrace_ops *ops,
 				     int filter_hash,
 				     bool inc)
···
 
 		if (inc) {
 			rec->flags++;
-			if (FTRACE_WARN_ON((rec->flags & ~FTRACE_FL_MASK) == FTRACE_REF_MAX))
+			if (FTRACE_WARN_ON(ftrace_rec_count(rec) == FTRACE_REF_MAX))
 				return;
+
+			/*
+			 * If there's only a single callback registered to a
+			 * function, and the ops has a trampoline registered
+			 * for it, then we can call it directly.
+			 */
+			if (ftrace_rec_count(rec) == 1 && ops->trampoline) {
+				rec->flags |= FTRACE_FL_TRAMP;
+				ops->nr_trampolines++;
+			} else {
+				/*
+				 * If we are adding another function callback
+				 * to this function, and the previous had a
+				 * trampoline used, then we need to go back to
+				 * the default trampoline.
+				 */
+				rec->flags &= ~FTRACE_FL_TRAMP;
+
+				/* remove trampolines from any ops for this rec */
+				ftrace_clear_tramps(rec);
+			}
+
 			/*
 			 * If any ops wants regs saved for this function
 			 * then all ops will get saved regs.
··· 1638 1581 if (ops->flags & FTRACE_OPS_FL_SAVE_REGS) 1639 1582 rec->flags |= FTRACE_FL_REGS; 1640 1583 } else { 1641 - if (FTRACE_WARN_ON((rec->flags & ~FTRACE_FL_MASK) == 0)) 1584 + if (FTRACE_WARN_ON(ftrace_rec_count(rec) == 0)) 1642 1585 return; 1643 1586 rec->flags--; 1587 + 1588 + if (ops->trampoline && !ftrace_rec_count(rec)) 1589 + ftrace_remove_tramp(ops, rec); 1590 + 1591 + /* 1592 + * If the rec had REGS enabled and the ops that is 1593 + * being removed had REGS set, then see if there is 1594 + * still any ops for this record that wants regs. 1595 + * If not, we can stop recording them. 1596 + */ 1597 + if (ftrace_rec_count(rec) > 0 && 1598 + rec->flags & FTRACE_FL_REGS && 1599 + ops->flags & FTRACE_OPS_FL_SAVE_REGS) { 1600 + if (!test_rec_ops_needs_regs(rec)) 1601 + rec->flags &= ~FTRACE_FL_REGS; 1602 + } 1603 + 1604 + /* 1605 + * flags will be cleared in ftrace_check_record() 1606 + * if rec count is zero. 1607 + */ 1644 1608 } 1645 1609 count++; 1646 1610 /* Shortcut, if we handled all records, we are done. */ ··· 1746 1668 * If we are disabling calls, then disable all records that 1747 1669 * are enabled. 1748 1670 */ 1749 - if (enable && (rec->flags & ~FTRACE_FL_MASK)) 1671 + if (enable && ftrace_rec_count(rec)) 1750 1672 flag = FTRACE_FL_ENABLED; 1751 1673 1752 1674 /* 1753 - * If enabling and the REGS flag does not match the REGS_EN, then 1754 - * do not ignore this record. Set flags to fail the compare against 1755 - * ENABLED. 1675 + * If enabling and the REGS flag does not match the REGS_EN, or 1676 + * the TRAMP flag doesn't match the TRAMP_EN, then do not ignore 1677 + * this record. Set flags to fail the compare against ENABLED. 
1756 1678 */ 1757 - if (flag && 1758 - (!(rec->flags & FTRACE_FL_REGS) != !(rec->flags & FTRACE_FL_REGS_EN))) 1759 - flag |= FTRACE_FL_REGS; 1679 + if (flag) { 1680 + if (!(rec->flags & FTRACE_FL_REGS) != 1681 + !(rec->flags & FTRACE_FL_REGS_EN)) 1682 + flag |= FTRACE_FL_REGS; 1683 + 1684 + if (!(rec->flags & FTRACE_FL_TRAMP) != 1685 + !(rec->flags & FTRACE_FL_TRAMP_EN)) 1686 + flag |= FTRACE_FL_TRAMP; 1687 + } 1760 1688 1761 1689 /* If the state of this record hasn't changed, then do nothing */ 1762 1690 if ((rec->flags & FTRACE_FL_ENABLED) == flag) ··· 1780 1696 else 1781 1697 rec->flags &= ~FTRACE_FL_REGS_EN; 1782 1698 } 1699 + if (flag & FTRACE_FL_TRAMP) { 1700 + if (rec->flags & FTRACE_FL_TRAMP) 1701 + rec->flags |= FTRACE_FL_TRAMP_EN; 1702 + else 1703 + rec->flags &= ~FTRACE_FL_TRAMP_EN; 1704 + } 1783 1705 } 1784 1706 1785 1707 /* ··· 1794 1704 * Otherwise, 1795 1705 * return UPDATE_MODIFY_CALL to tell the caller to convert 1796 1706 * from the save regs, to a non-save regs function or 1797 - * vice versa. 1707 + * vice versa, or from a trampoline call. 
1798 1708 */ 1799 1709 if (flag & FTRACE_FL_ENABLED) 1800 1710 return FTRACE_UPDATE_MAKE_CALL; ··· 1804 1714 1805 1715 if (update) { 1806 1716 /* If there's no more users, clear all flags */ 1807 - if (!(rec->flags & ~FTRACE_FL_MASK)) 1717 + if (!ftrace_rec_count(rec)) 1808 1718 rec->flags = 0; 1809 1719 else 1810 1720 /* Just disable the record (keep REGS state) */ ··· 1841 1751 return ftrace_check_record(rec, enable, 0); 1842 1752 } 1843 1753 1754 + static struct ftrace_ops * 1755 + ftrace_find_tramp_ops_curr(struct dyn_ftrace *rec) 1756 + { 1757 + struct ftrace_ops *op; 1758 + 1759 + /* Removed ops need to be tested first */ 1760 + if (removed_ops && removed_ops->tramp_hash) { 1761 + if (ftrace_lookup_ip(removed_ops->tramp_hash, rec->ip)) 1762 + return removed_ops; 1763 + } 1764 + 1765 + do_for_each_ftrace_op(op, ftrace_ops_list) { 1766 + if (!op->tramp_hash) 1767 + continue; 1768 + 1769 + if (ftrace_lookup_ip(op->tramp_hash, rec->ip)) 1770 + return op; 1771 + 1772 + } while_for_each_ftrace_op(op); 1773 + 1774 + return NULL; 1775 + } 1776 + 1777 + static struct ftrace_ops * 1778 + ftrace_find_tramp_ops_new(struct dyn_ftrace *rec) 1779 + { 1780 + struct ftrace_ops *op; 1781 + 1782 + do_for_each_ftrace_op(op, ftrace_ops_list) { 1783 + /* pass rec in as regs to have non-NULL val */ 1784 + if (ftrace_ops_test(op, rec->ip, rec)) 1785 + return op; 1786 + } while_for_each_ftrace_op(op); 1787 + 1788 + return NULL; 1789 + } 1790 + 1844 1791 /** 1845 1792 * ftrace_get_addr_new - Get the call address to set to 1846 1793 * @rec: The ftrace record descriptor ··· 1890 1763 */ 1891 1764 unsigned long ftrace_get_addr_new(struct dyn_ftrace *rec) 1892 1765 { 1766 + struct ftrace_ops *ops; 1767 + 1768 + /* Trampolines take precedence over regs */ 1769 + if (rec->flags & FTRACE_FL_TRAMP) { 1770 + ops = ftrace_find_tramp_ops_new(rec); 1771 + if (FTRACE_WARN_ON(!ops || !ops->trampoline)) { 1772 + pr_warning("Bad trampoline accounting at: %p (%pS)\n", 1773 + (void *)rec->ip, (void 
*)rec->ip); 1774 + /* Ftrace is shutting down, return anything */ 1775 + return (unsigned long)FTRACE_ADDR; 1776 + } 1777 + return ops->trampoline; 1778 + } 1779 + 1893 1780 if (rec->flags & FTRACE_FL_REGS) 1894 1781 return (unsigned long)FTRACE_REGS_ADDR; 1895 1782 else ··· 1922 1781 */ 1923 1782 unsigned long ftrace_get_addr_curr(struct dyn_ftrace *rec) 1924 1783 { 1784 + struct ftrace_ops *ops; 1785 + 1786 + /* Trampolines take precedence over regs */ 1787 + if (rec->flags & FTRACE_FL_TRAMP_EN) { 1788 + ops = ftrace_find_tramp_ops_curr(rec); 1789 + if (FTRACE_WARN_ON(!ops)) { 1790 + pr_warning("Bad trampoline accounting at: %p (%pS)\n", 1791 + (void *)rec->ip, (void *)rec->ip); 1792 + /* Ftrace is shutting down, return anything */ 1793 + return (unsigned long)FTRACE_ADDR; 1794 + } 1795 + return ops->trampoline; 1796 + } 1797 + 1925 1798 if (rec->flags & FTRACE_FL_REGS_EN) 1926 1799 return (unsigned long)FTRACE_REGS_ADDR; 1927 1800 else ··· 2178 2023 ftrace_run_stop_machine(command); 2179 2024 } 2180 2025 2026 + static int ftrace_save_ops_tramp_hash(struct ftrace_ops *ops) 2027 + { 2028 + struct ftrace_page *pg; 2029 + struct dyn_ftrace *rec; 2030 + int size, bits; 2031 + int ret; 2032 + 2033 + size = ops->nr_trampolines; 2034 + bits = 0; 2035 + /* 2036 + * Make the hash size about 1/2 the # found 2037 + */ 2038 + for (size /= 2; size; size >>= 1) 2039 + bits++; 2040 + 2041 + ops->tramp_hash = alloc_ftrace_hash(bits); 2042 + /* 2043 + * TODO: a failed allocation is going to screw up 2044 + * the accounting of what needs to be modified 2045 + * and not. For now, we kill ftrace if we fail 2046 + * to allocate here. But there are ways around this, 2047 + * but that will take a little more work. 
2048 + */ 2049 + if (!ops->tramp_hash) 2050 + return -ENOMEM; 2051 + 2052 + do_for_each_ftrace_rec(pg, rec) { 2053 + if (ftrace_rec_count(rec) == 1 && 2054 + ftrace_ops_test(ops, rec->ip, rec)) { 2055 + 2056 + /* 2057 + * If another ops adds to a rec, the rec will 2058 + * lose its trampoline and never get it back 2059 + * until all ops are off of it. 2060 + */ 2061 + if (!(rec->flags & FTRACE_FL_TRAMP)) 2062 + continue; 2063 + 2064 + /* This record had better have a trampoline */ 2065 + if (FTRACE_WARN_ON(!(rec->flags & FTRACE_FL_TRAMP_EN))) 2066 + return -1; 2067 + 2068 + ret = add_hash_entry(ops->tramp_hash, rec->ip); 2069 + if (ret < 0) 2070 + return ret; 2071 + } 2072 + } while_for_each_ftrace_rec(); 2073 + 2074 + /* The number of recs in the hash must match nr_trampolines */ 2075 + FTRACE_WARN_ON(ops->tramp_hash->count != ops->nr_trampolines); 2076 + 2077 + return 0; 2078 + } 2079 + 2080 + static int ftrace_save_tramp_hashes(void) 2081 + { 2082 + struct ftrace_ops *op; 2083 + int ret; 2084 + 2085 + /* 2086 + * Now that any trampoline is being used, we need to save the 2087 + * hashes for the ops that have them. This allows the mapping 2088 + * back from the record to the ops that has the trampoline to 2089 + * know what code is being replaced. Modifying code must always 2090 + * verify what it is changing. 2091 + */ 2092 + do_for_each_ftrace_op(op, ftrace_ops_list) { 2093 + 2094 + /* The tramp_hash is recreated each time. */ 2095 + free_ftrace_hash(op->tramp_hash); 2096 + op->tramp_hash = NULL; 2097 + 2098 + if (op->nr_trampolines) { 2099 + ret = ftrace_save_ops_tramp_hash(op); 2100 + if (ret) 2101 + return ret; 2102 + } 2103 + 2104 + } while_for_each_ftrace_op(op); 2105 + 2106 + return 0; 2107 + } 2108 + 2181 2109 static void ftrace_run_update_code(int command) 2182 2110 { 2183 2111 int ret; ··· 2269 2031 FTRACE_WARN_ON(ret); 2270 2032 if (ret) 2271 2033 return; 2272 - /* 2273 - * Do not call function tracer while we update the code. 
2274 - * We are in stop machine. 2275 - */ 2276 - function_trace_stop++; 2277 2034 2278 2035 /* 2279 2036 * By default we use stop_machine() to modify the code. ··· 2278 2045 */ 2279 2046 arch_ftrace_update_code(command); 2280 2047 2281 - function_trace_stop--; 2282 - 2283 2048 ret = ftrace_arch_code_modify_post_process(); 2049 + FTRACE_WARN_ON(ret); 2050 + 2051 + ret = ftrace_save_tramp_hashes(); 2284 2052 FTRACE_WARN_ON(ret); 2285 2053 } 2286 2054 2287 2055 static ftrace_func_t saved_ftrace_func; 2288 2056 static int ftrace_start_up; 2289 - static int global_start_up; 2290 2057 2291 2058 static void control_ops_free(struct ftrace_ops *ops) 2292 2059 { ··· 2350 2117 2351 2118 ftrace_hash_rec_disable(ops, 1); 2352 2119 2353 - if (!global_start_up) 2354 - ops->flags &= ~FTRACE_OPS_FL_ENABLED; 2120 + ops->flags &= ~FTRACE_OPS_FL_ENABLED; 2355 2121 2356 2122 command |= FTRACE_UPDATE_CALLS; 2357 2123 ··· 2371 2139 return 0; 2372 2140 } 2373 2141 2142 + /* 2143 + * If the ops uses a trampoline, then it needs to be 2144 + * tested first on update. 2145 + */ 2146 + removed_ops = ops; 2147 + 2374 2148 ftrace_run_update_code(command); 2149 + 2150 + removed_ops = NULL; 2375 2151 2376 2152 /* 2377 2153 * Dynamic ops may be freed, we must make sure that all ··· 2638 2398 return start_pg; 2639 2399 2640 2400 free_pages: 2641 - while (start_pg) { 2401 + pg = start_pg; 2402 + while (pg) { 2642 2403 order = get_count_order(pg->size / ENTRIES_PER_PAGE); 2643 2404 free_pages((unsigned long)pg->records, order); 2644 2405 start_pg = pg->next; ··· 2836 2595 * off, we can short cut and just print out that all 2837 2596 * functions are enabled. 
2838 2597 */ 2839 - if (iter->flags & FTRACE_ITER_FILTER && 2840 - ftrace_hash_empty(ops->filter_hash)) { 2598 + if ((iter->flags & FTRACE_ITER_FILTER && 2599 + ftrace_hash_empty(ops->filter_hash)) || 2600 + (iter->flags & FTRACE_ITER_NOTRACE && 2601 + ftrace_hash_empty(ops->notrace_hash))) { 2841 2602 if (*pos > 0) 2842 2603 return t_hash_start(m, pos); 2843 2604 iter->flags |= FTRACE_ITER_PRINTALL; ··· 2884 2641 return t_hash_show(m, iter); 2885 2642 2886 2643 if (iter->flags & FTRACE_ITER_PRINTALL) { 2887 - seq_printf(m, "#### all functions enabled ####\n"); 2644 + if (iter->flags & FTRACE_ITER_NOTRACE) 2645 + seq_printf(m, "#### no functions disabled ####\n"); 2646 + else 2647 + seq_printf(m, "#### all functions enabled ####\n"); 2888 2648 return 0; 2889 2649 } 2890 2650 ··· 2897 2651 return 0; 2898 2652 2899 2653 seq_printf(m, "%ps", (void *)rec->ip); 2900 - if (iter->flags & FTRACE_ITER_ENABLED) 2654 + if (iter->flags & FTRACE_ITER_ENABLED) { 2901 2655 seq_printf(m, " (%ld)%s", 2902 - rec->flags & ~FTRACE_FL_MASK, 2903 - rec->flags & FTRACE_FL_REGS ? " R" : ""); 2656 + ftrace_rec_count(rec), 2657 + rec->flags & FTRACE_FL_REGS ? " R" : " "); 2658 + if (rec->flags & FTRACE_FL_TRAMP_EN) { 2659 + struct ftrace_ops *ops; 2660 + 2661 + ops = ftrace_find_tramp_ops_curr(rec); 2662 + if (ops && ops->trampoline) 2663 + seq_printf(m, "\ttramp: %pS", 2664 + (void *)ops->trampoline); 2665 + else 2666 + seq_printf(m, "\ttramp: ERROR!"); 2667 + } 2668 + } 2669 + 2904 2670 seq_printf(m, "\n"); 2905 2671 2906 2672 return 0; ··· 2958 2700 } 2959 2701 2960 2702 return iter ? 
0 : -ENOMEM; 2961 - } 2962 - 2963 - static void ftrace_filter_reset(struct ftrace_hash *hash) 2964 - { 2965 - mutex_lock(&ftrace_lock); 2966 - ftrace_hash_clear(hash); 2967 - mutex_unlock(&ftrace_lock); 2968 2703 } 2969 2704 2970 2705 /** ··· 3009 2758 hash = ops->filter_hash; 3010 2759 3011 2760 if (file->f_mode & FMODE_WRITE) { 3012 - iter->hash = alloc_and_copy_ftrace_hash(FTRACE_HASH_DEFAULT_BITS, hash); 2761 + const int size_bits = FTRACE_HASH_DEFAULT_BITS; 2762 + 2763 + if (file->f_flags & O_TRUNC) 2764 + iter->hash = alloc_ftrace_hash(size_bits); 2765 + else 2766 + iter->hash = alloc_and_copy_ftrace_hash(size_bits, hash); 2767 + 3013 2768 if (!iter->hash) { 3014 2769 trace_parser_put(&iter->parser); 3015 2770 kfree(iter); ··· 3023 2766 goto out_unlock; 3024 2767 } 3025 2768 } 3026 - 3027 - if ((file->f_mode & FMODE_WRITE) && 3028 - (file->f_flags & O_TRUNC)) 3029 - ftrace_filter_reset(iter->hash); 3030 2769 3031 2770 if (file->f_mode & FMODE_READ) { 3032 2771 iter->pg = ftrace_pages_start; ··· 3724 3471 else 3725 3472 orig_hash = &ops->notrace_hash; 3726 3473 3727 - hash = alloc_and_copy_ftrace_hash(FTRACE_HASH_DEFAULT_BITS, *orig_hash); 3474 + if (reset) 3475 + hash = alloc_ftrace_hash(FTRACE_HASH_DEFAULT_BITS); 3476 + else 3477 + hash = alloc_and_copy_ftrace_hash(FTRACE_HASH_DEFAULT_BITS, *orig_hash); 3478 + 3728 3479 if (!hash) { 3729 3480 ret = -ENOMEM; 3730 3481 goto out_regex_unlock; 3731 3482 } 3732 3483 3733 - if (reset) 3734 - ftrace_filter_reset(hash); 3735 3484 if (buf && !ftrace_match_records(hash, buf, len)) { 3736 3485 ret = -EINVAL; 3737 3486 goto out_regex_unlock; ··· 3885 3630 3886 3631 #ifdef CONFIG_FUNCTION_GRAPH_TRACER 3887 3632 static char ftrace_graph_buf[FTRACE_FILTER_SIZE] __initdata; 3633 + static char ftrace_graph_notrace_buf[FTRACE_FILTER_SIZE] __initdata; 3888 3634 static int ftrace_set_func(unsigned long *array, int *idx, int size, char *buffer); 3889 3635 3890 3636 static int __init set_graph_function(char *str) ··· 3895 3639 } 
3896 3640 __setup("ftrace_graph_filter=", set_graph_function); 3897 3641 3898 - static void __init set_ftrace_early_graph(char *buf) 3642 + static int __init set_graph_notrace_function(char *str) 3643 + { 3644 + strlcpy(ftrace_graph_notrace_buf, str, FTRACE_FILTER_SIZE); 3645 + return 1; 3646 + } 3647 + __setup("ftrace_graph_notrace=", set_graph_notrace_function); 3648 + 3649 + static void __init set_ftrace_early_graph(char *buf, int enable) 3899 3650 { 3900 3651 int ret; 3901 3652 char *func; 3653 + unsigned long *table = ftrace_graph_funcs; 3654 + int *count = &ftrace_graph_count; 3655 + 3656 + if (!enable) { 3657 + table = ftrace_graph_notrace_funcs; 3658 + count = &ftrace_graph_notrace_count; 3659 + } 3902 3660 3903 3661 while (buf) { 3904 3662 func = strsep(&buf, ","); 3905 3663 /* we allow only one expression at a time */ 3906 - ret = ftrace_set_func(ftrace_graph_funcs, &ftrace_graph_count, 3907 - FTRACE_GRAPH_MAX_FUNCS, func); 3664 + ret = ftrace_set_func(table, count, FTRACE_GRAPH_MAX_FUNCS, func); 3908 3665 if (ret) 3909 3666 printk(KERN_DEBUG "ftrace: function %s not " 3910 3667 "traceable\n", func); ··· 3946 3677 ftrace_set_early_filter(&global_ops, ftrace_notrace_buf, 0); 3947 3678 #ifdef CONFIG_FUNCTION_GRAPH_TRACER 3948 3679 if (ftrace_graph_buf[0]) 3949 - set_ftrace_early_graph(ftrace_graph_buf); 3680 + set_ftrace_early_graph(ftrace_graph_buf, 1); 3681 + if (ftrace_graph_notrace_buf[0]) 3682 + set_ftrace_early_graph(ftrace_graph_notrace_buf, 0); 3950 3683 #endif /* CONFIG_FUNCTION_GRAPH_TRACER */ 3951 3684 } 3952 3685 ··· 4090 3819 return 0; 4091 3820 4092 3821 if (ptr == (unsigned long *)1) { 4093 - seq_printf(m, "#### all functions enabled ####\n"); 3822 + struct ftrace_graph_data *fgd = m->private; 3823 + 3824 + if (fgd->table == ftrace_graph_funcs) 3825 + seq_printf(m, "#### all functions enabled ####\n"); 3826 + else 3827 + seq_printf(m, "#### no functions disabled ####\n"); 4094 3828 return 0; 4095 3829 } 4096 3830 ··· 4723 4447 struct 
ftrace_ops *op; 4724 4448 int bit; 4725 4449 4726 - if (function_trace_stop) 4727 - return; 4728 - 4729 4450 bit = trace_test_and_set_recursion(TRACE_LIST_START, TRACE_LIST_MAX); 4730 4451 if (bit < 0) 4731 4452 return; ··· 4734 4461 preempt_disable_notrace(); 4735 4462 do_for_each_ftrace_op(op, ftrace_ops_list) { 4736 4463 if (ftrace_ops_test(op, ip, regs)) { 4737 - if (WARN_ON(!op->func)) { 4738 - function_trace_stop = 1; 4739 - printk("op=%p %pS\n", op, op); 4464 + if (FTRACE_WARN_ON(!op->func)) { 4465 + pr_warn("op=%p %pS\n", op, op); 4740 4466 goto out; 4741 4467 } 4742 4468 op->func(ip, parent_ip, op, regs); ··· 5356 5084 /* Function graph doesn't use the .func field of global_ops */ 5357 5085 global_ops.flags |= FTRACE_OPS_FL_STUB; 5358 5086 5087 + #ifdef CONFIG_DYNAMIC_FTRACE 5088 + /* Optimize function graph calling (if implemented by arch) */ 5089 + if (FTRACE_GRAPH_TRAMP_ADDR != 0) 5090 + global_ops.trampoline = FTRACE_GRAPH_TRAMP_ADDR; 5091 + #endif 5092 + 5359 5093 ret = ftrace_startup(&global_ops, FTRACE_START_FUNC_RET); 5360 5094 5361 5095 out: ··· 5382 5104 __ftrace_graph_entry = ftrace_graph_entry_stub; 5383 5105 ftrace_shutdown(&global_ops, FTRACE_STOP_FUNC_RET); 5384 5106 global_ops.flags &= ~FTRACE_OPS_FL_STUB; 5107 + #ifdef CONFIG_DYNAMIC_FTRACE 5108 + if (FTRACE_GRAPH_TRAMP_ADDR != 0) 5109 + global_ops.trampoline = 0; 5110 + #endif 5385 5111 unregister_pm_notifier(&ftrace_suspend_notifier); 5386 5112 unregister_trace_sched_switch(ftrace_graph_probe_sched_switch, NULL); 5387 5113 ··· 5464 5182 barrier(); 5465 5183 5466 5184 kfree(ret_stack); 5467 - } 5468 - 5469 - void ftrace_graph_stop(void) 5470 - { 5471 - ftrace_stop(); 5472 5185 } 5473 5186 #endif
+5 -21
kernel/trace/ring_buffer.c
··· 1689 1689 if (!cpu_buffer->nr_pages_to_update) 1690 1690 continue; 1691 1691 1692 - /* The update must run on the CPU that is being updated. */ 1693 - preempt_disable(); 1694 - if (cpu == smp_processor_id() || !cpu_online(cpu)) { 1692 + /* Can't run something on an offline CPU. */ 1693 + if (!cpu_online(cpu)) { 1695 1694 rb_update_pages(cpu_buffer); 1696 1695 cpu_buffer->nr_pages_to_update = 0; 1697 1696 } else { 1698 - /* 1699 - * Can not disable preemption for schedule_work_on() 1700 - * on PREEMPT_RT. 1701 - */ 1702 - preempt_enable(); 1703 1697 schedule_work_on(cpu, 1704 1698 &cpu_buffer->update_pages_work); 1705 - preempt_disable(); 1706 1699 } 1707 - preempt_enable(); 1708 1700 } 1709 1701 1710 1702 /* wait for all the updates to complete */ ··· 1734 1742 1735 1743 get_online_cpus(); 1736 1744 1737 - preempt_disable(); 1738 - /* The update must run on the CPU that is being updated. */ 1739 - if (cpu_id == smp_processor_id() || !cpu_online(cpu_id)) 1745 + /* Can't run something on an offline CPU. */ 1746 + if (!cpu_online(cpu_id)) 1740 1747 rb_update_pages(cpu_buffer); 1741 1748 else { 1742 - /* 1743 - * Can not disable preemption for schedule_work_on() 1744 - * on PREEMPT_RT. 1745 - */ 1746 - preempt_enable(); 1747 1749 schedule_work_on(cpu_id, 1748 1750 &cpu_buffer->update_pages_work); 1749 1751 wait_for_completion(&cpu_buffer->update_done); 1750 - preempt_disable(); 1751 1752 } 1752 - preempt_enable(); 1753 1753 1754 1754 cpu_buffer->nr_pages_to_update = 0; 1755 1755 put_online_cpus(); ··· 3759 3775 if (rb_per_cpu_empty(cpu_buffer)) 3760 3776 return NULL; 3761 3777 3762 - if (iter->head >= local_read(&iter->head_page->page->commit)) { 3778 + if (iter->head >= rb_page_size(iter->head_page)) { 3763 3779 rb_inc_iter(iter); 3764 3780 goto again; 3765 3781 }
+61 -35
kernel/trace/trace.c
··· 937 937 return ret; 938 938 } 939 939 940 - ssize_t trace_seq_to_user(struct trace_seq *s, char __user *ubuf, size_t cnt) 941 - { 942 - int len; 943 - int ret; 944 - 945 - if (!cnt) 946 - return 0; 947 - 948 - if (s->len <= s->readpos) 949 - return -EBUSY; 950 - 951 - len = s->len - s->readpos; 952 - if (cnt > len) 953 - cnt = len; 954 - ret = copy_to_user(ubuf, s->buffer + s->readpos, cnt); 955 - if (ret == cnt) 956 - return -EFAULT; 957 - 958 - cnt -= ret; 959 - 960 - s->readpos += cnt; 961 - return cnt; 962 - } 963 - 964 940 static ssize_t trace_seq_to_buffer(struct trace_seq *s, void *buf, size_t cnt) 965 941 { 966 942 int len; ··· 3675 3699 #endif 3676 3700 #ifdef CONFIG_FUNCTION_GRAPH_TRACER 3677 3701 " set_graph_function\t- Trace the nested calls of a function (function_graph)\n" 3702 + " set_graph_notrace\t- Do not trace the nested calls of a function (function_graph)\n" 3678 3703 " max_graph_depth\t- Trace a limited depth of nested calls (0 is unlimited)\n" 3679 3704 #endif 3680 3705 #ifdef CONFIG_TRACER_SNAPSHOT ··· 4215 4238 } 4216 4239 4217 4240 static ssize_t 4218 - tracing_max_lat_read(struct file *filp, char __user *ubuf, 4219 - size_t cnt, loff_t *ppos) 4241 + tracing_nsecs_read(unsigned long *ptr, char __user *ubuf, 4242 + size_t cnt, loff_t *ppos) 4220 4243 { 4221 - unsigned long *ptr = filp->private_data; 4222 4244 char buf[64]; 4223 4245 int r; 4224 4246 ··· 4229 4253 } 4230 4254 4231 4255 static ssize_t 4232 - tracing_max_lat_write(struct file *filp, const char __user *ubuf, 4233 - size_t cnt, loff_t *ppos) 4256 + tracing_nsecs_write(unsigned long *ptr, const char __user *ubuf, 4257 + size_t cnt, loff_t *ppos) 4234 4258 { 4235 - unsigned long *ptr = filp->private_data; 4236 4259 unsigned long val; 4237 4260 int ret; 4238 4261 ··· 4242 4267 *ptr = val * 1000; 4243 4268 4244 4269 return cnt; 4270 + } 4271 + 4272 + static ssize_t 4273 + tracing_thresh_read(struct file *filp, char __user *ubuf, 4274 + size_t cnt, loff_t *ppos) 4275 + { 4276 + 
return tracing_nsecs_read(&tracing_thresh, ubuf, cnt, ppos); 4277 + } 4278 + 4279 + static ssize_t 4280 + tracing_thresh_write(struct file *filp, const char __user *ubuf, 4281 + size_t cnt, loff_t *ppos) 4282 + { 4283 + struct trace_array *tr = filp->private_data; 4284 + int ret; 4285 + 4286 + mutex_lock(&trace_types_lock); 4287 + ret = tracing_nsecs_write(&tracing_thresh, ubuf, cnt, ppos); 4288 + if (ret < 0) 4289 + goto out; 4290 + 4291 + if (tr->current_trace->update_thresh) { 4292 + ret = tr->current_trace->update_thresh(tr); 4293 + if (ret < 0) 4294 + goto out; 4295 + } 4296 + 4297 + ret = cnt; 4298 + out: 4299 + mutex_unlock(&trace_types_lock); 4300 + 4301 + return ret; 4302 + } 4303 + 4304 + static ssize_t 4305 + tracing_max_lat_read(struct file *filp, char __user *ubuf, 4306 + size_t cnt, loff_t *ppos) 4307 + { 4308 + return tracing_nsecs_read(filp->private_data, ubuf, cnt, ppos); 4309 + } 4310 + 4311 + static ssize_t 4312 + tracing_max_lat_write(struct file *filp, const char __user *ubuf, 4313 + size_t cnt, loff_t *ppos) 4314 + { 4315 + return tracing_nsecs_write(filp->private_data, ubuf, cnt, ppos); 4245 4316 } 4246 4317 4247 4318 static int tracing_open_pipe(struct inode *inode, struct file *filp) ··· 5190 5169 5191 5170 #endif /* CONFIG_TRACER_SNAPSHOT */ 5192 5171 5172 + 5173 + static const struct file_operations tracing_thresh_fops = { 5174 + .open = tracing_open_generic, 5175 + .read = tracing_thresh_read, 5176 + .write = tracing_thresh_write, 5177 + .llseek = generic_file_llseek, 5178 + }; 5193 5179 5194 5180 static const struct file_operations tracing_max_lat_fops = { 5195 5181 .open = tracing_open_generic, ··· 6135 6107 if (!topts) 6136 6108 return; 6137 6109 6138 - for (cnt = 0; topts[cnt].opt; cnt++) { 6139 - if (topts[cnt].entry) 6140 - debugfs_remove(topts[cnt].entry); 6141 - } 6110 + for (cnt = 0; topts[cnt].opt; cnt++) 6111 + debugfs_remove(topts[cnt].entry); 6142 6112 6143 6113 kfree(topts); 6144 6114 } ··· 6559 6533 
init_tracer_debugfs(&global_trace, d_tracer); 6560 6534 6561 6535 trace_create_file("tracing_thresh", 0644, d_tracer, 6562 - &tracing_thresh, &tracing_max_lat_fops); 6536 + &global_trace, &tracing_thresh_fops); 6563 6537 6564 6538 trace_create_file("README", 0444, d_tracer, 6565 6539 NULL, &tracing_readme_fops);
+2
kernel/trace/trace.h
··· 339 339 * @reset: called when one switches to another tracer 340 340 * @start: called when tracing is unpaused (echo 1 > tracing_enabled) 341 341 * @stop: called when tracing is paused (echo 0 > tracing_enabled) 342 + * @update_thresh: called when tracing_thresh is updated 342 343 * @open: called when the trace file is opened 343 344 * @pipe_open: called when the trace_pipe file is opened 344 345 * @close: called when the trace file is released ··· 358 357 void (*reset)(struct trace_array *tr); 359 358 void (*start)(struct trace_array *tr); 360 359 void (*stop)(struct trace_array *tr); 360 + int (*update_thresh)(struct trace_array *tr); 361 361 void (*open)(struct trace_iterator *iter); 362 362 void (*pipe_open)(struct trace_iterator *iter); 363 363 void (*close)(struct trace_iterator *iter);
+27 -29
kernel/trace/trace_events.c
··· 8 8 * 9 9 */ 10 10 11 + #define pr_fmt(fmt) fmt 12 + 11 13 #include <linux/workqueue.h> 12 14 #include <linux/spinlock.h> 13 15 #include <linux/kthread.h> ··· 1493 1491 1494 1492 dir->entry = debugfs_create_dir(name, parent); 1495 1493 if (!dir->entry) { 1496 - pr_warning("Failed to create system directory %s\n", name); 1494 + pr_warn("Failed to create system directory %s\n", name); 1497 1495 __put_system(system); 1498 1496 goto out_free; 1499 1497 } ··· 1509 1507 if (!entry) { 1510 1508 kfree(system->filter); 1511 1509 system->filter = NULL; 1512 - pr_warning("Could not create debugfs '%s/filter' entry\n", name); 1510 + pr_warn("Could not create debugfs '%s/filter' entry\n", name); 1513 1511 } 1514 1512 1515 1513 trace_create_file("enable", 0644, dir->entry, dir, ··· 1524 1522 out_fail: 1525 1523 /* Only print this message if failed on memory allocation */ 1526 1524 if (!dir || !system) 1527 - pr_warning("No memory to create event subsystem %s\n", 1528 - name); 1525 + pr_warn("No memory to create event subsystem %s\n", name); 1529 1526 return NULL; 1530 1527 } 1531 1528 ··· 1552 1551 name = ftrace_event_name(call); 1553 1552 file->dir = debugfs_create_dir(name, d_events); 1554 1553 if (!file->dir) { 1555 - pr_warning("Could not create debugfs '%s' directory\n", 1556 - name); 1554 + pr_warn("Could not create debugfs '%s' directory\n", name); 1557 1555 return -1; 1558 1556 } 1559 1557 ··· 1575 1575 if (list_empty(head)) { 1576 1576 ret = call->class->define_fields(call); 1577 1577 if (ret < 0) { 1578 - pr_warning("Could not initialize trace point" 1579 - " events/%s\n", name); 1578 + pr_warn("Could not initialize trace point events/%s\n", 1579 + name); 1580 1580 return -1; 1581 1581 } 1582 1582 } ··· 1649 1649 if (call->class->raw_init) { 1650 1650 ret = call->class->raw_init(call); 1651 1651 if (ret < 0 && ret != -ENOSYS) 1652 - pr_warn("Could not initialize trace events/%s\n", 1653 - name); 1652 + pr_warn("Could not initialize trace events/%s\n", name); 1654 
1653 } 1655 1654 1656 1655 return ret; ··· 1894 1895 list_for_each_entry(call, &ftrace_events, list) { 1895 1896 ret = __trace_add_new_event(call, tr); 1896 1897 if (ret < 0) 1897 - pr_warning("Could not create directory for event %s\n", 1898 - ftrace_event_name(call)); 1898 + pr_warn("Could not create directory for event %s\n", 1899 + ftrace_event_name(call)); 1899 1900 } 1900 1901 } 1901 1902 ··· 2207 2208 list_for_each_entry(file, &tr->events, list) { 2208 2209 ret = event_create_dir(tr->event_dir, file); 2209 2210 if (ret < 0) 2210 - pr_warning("Could not create directory for event %s\n", 2211 - ftrace_event_name(file->event_call)); 2211 + pr_warn("Could not create directory for event %s\n", 2212 + ftrace_event_name(file->event_call)); 2212 2213 } 2213 2214 } 2214 2215 ··· 2231 2232 2232 2233 ret = __trace_early_add_new_event(call, tr); 2233 2234 if (ret < 0) 2234 - pr_warning("Could not create early event %s\n", 2235 - ftrace_event_name(call)); 2235 + pr_warn("Could not create early event %s\n", 2236 + ftrace_event_name(call)); 2236 2237 } 2237 2238 } 2238 2239 ··· 2279 2280 entry = debugfs_create_file("set_event", 0644, parent, 2280 2281 tr, &ftrace_set_event_fops); 2281 2282 if (!entry) { 2282 - pr_warning("Could not create debugfs 'set_event' entry\n"); 2283 + pr_warn("Could not create debugfs 'set_event' entry\n"); 2283 2284 return -ENOMEM; 2284 2285 } 2285 2286 2286 2287 d_events = debugfs_create_dir("events", parent); 2287 2288 if (!d_events) { 2288 - pr_warning("Could not create debugfs 'events' directory\n"); 2289 + pr_warn("Could not create debugfs 'events' directory\n"); 2289 2290 return -ENOMEM; 2290 2291 } 2291 2292 ··· 2461 2462 entry = debugfs_create_file("available_events", 0444, d_tracer, 2462 2463 tr, &ftrace_avail_fops); 2463 2464 if (!entry) 2464 - pr_warning("Could not create debugfs " 2465 - "'available_events' entry\n"); 2465 + pr_warn("Could not create debugfs 'available_events' entry\n"); 2466 2466 2467 2467 if 
(trace_define_common_fields()) 2468 - pr_warning("tracing: Failed to allocate common fields"); 2468 + pr_warn("tracing: Failed to allocate common fields"); 2469 2469 2470 2470 ret = early_event_add_tracer(d_tracer, tr); 2471 2471 if (ret) ··· 2473 2475 #ifdef CONFIG_MODULES 2474 2476 ret = register_module_notifier(&trace_module_nb); 2475 2477 if (ret) 2476 - pr_warning("Failed to register trace events module notifier\n"); 2478 + pr_warn("Failed to register trace events module notifier\n"); 2477 2479 #endif 2478 2480 return 0; 2479 2481 } ··· 2577 2579 * it and the self test should not be on. 2578 2580 */ 2579 2581 if (file->flags & FTRACE_EVENT_FL_ENABLED) { 2580 - pr_warning("Enabled event during self test!\n"); 2582 + pr_warn("Enabled event during self test!\n"); 2581 2583 WARN_ON_ONCE(1); 2582 2584 continue; 2583 2585 } ··· 2605 2607 2606 2608 ret = __ftrace_set_clr_event(tr, NULL, system->name, NULL, 1); 2607 2609 if (WARN_ON_ONCE(ret)) { 2608 - pr_warning("error enabling system %s\n", 2609 - system->name); 2610 + pr_warn("error enabling system %s\n", 2611 + system->name); 2610 2612 continue; 2611 2613 } 2612 2614 ··· 2614 2616 2615 2617 ret = __ftrace_set_clr_event(tr, NULL, system->name, NULL, 0); 2616 2618 if (WARN_ON_ONCE(ret)) { 2617 - pr_warning("error disabling system %s\n", 2618 - system->name); 2619 + pr_warn("error disabling system %s\n", 2620 + system->name); 2619 2621 continue; 2620 2622 } 2621 2623 ··· 2629 2631 2630 2632 ret = __ftrace_set_clr_event(tr, NULL, NULL, NULL, 1); 2631 2633 if (WARN_ON_ONCE(ret)) { 2632 - pr_warning("error enabling all events\n"); 2634 + pr_warn("error enabling all events\n"); 2633 2635 return; 2634 2636 } 2635 2637 ··· 2638 2640 /* reset sysname */ 2639 2641 ret = __ftrace_set_clr_event(tr, NULL, NULL, NULL, 0); 2640 2642 if (WARN_ON_ONCE(ret)) { 2641 - pr_warning("error disabling all events\n"); 2643 + pr_warn("error disabling all events\n"); 2642 2644 return; 2643 2645 } 2644 2646
+40 -3
kernel/trace/trace_functions_graph.c
···
 #include "trace.h"
 #include "trace_output.h"
 
+static bool kill_ftrace_graph;
+
+/**
+ * ftrace_graph_is_dead - returns true if ftrace_graph_stop() was called
+ *
+ * ftrace_graph_stop() is called when a severe error is detected in
+ * the function graph tracing. This function is called by the critical
+ * paths of function graph to keep those paths from doing any more harm.
+ */
+bool ftrace_graph_is_dead(void)
+{
+	return kill_ftrace_graph;
+}
+
+/**
+ * ftrace_graph_stop - set to permanently disable function graph tracing
+ *
+ * In case of an error in function graph tracing, this is called
+ * to try to keep function graph tracing from causing any more harm.
+ * Usually this is pretty severe and this is called to try to at least
+ * get a warning out to the user.
+ */
+void ftrace_graph_stop(void)
+{
+	kill_ftrace_graph = true;
+}
+
 /* When set, irq functions will be ignored */
 static int ftrace_graph_skip_irqs;
···
 {
 	unsigned long long calltime;
 	int index;
+
+	if (unlikely(ftrace_graph_is_dead()))
+		return -EBUSY;
 
 	if (!current->ret_stack)
 		return -EBUSY;
···
 	return ret;
 }
 
-int trace_graph_thresh_entry(struct ftrace_graph_ent *trace)
+static int trace_graph_thresh_entry(struct ftrace_graph_ent *trace)
 {
 	if (tracing_thresh)
 		return 1;
···
 	smp_mb();
 }
 
-void trace_graph_thresh_return(struct ftrace_graph_ret *trace)
+static void trace_graph_thresh_return(struct ftrace_graph_ret *trace)
 {
 	if (tracing_thresh &&
 	    (trace->rettime - trace->calltime < tracing_thresh))
···
 {
 	tracing_stop_cmdline_record();
 	unregister_ftrace_graph();
+}
+
+static int graph_trace_update_thresh(struct trace_array *tr)
+{
+	graph_trace_reset(tr);
+	return graph_trace_init(tr);
 }
 
 static int max_bytes_for_cpu;
···
 	seq_printf(s, " |   |   |   |\n");
 }
 
-void print_graph_headers(struct seq_file *s)
+static void print_graph_headers(struct seq_file *s)
 {
 	print_graph_headers_flags(s, tracer_flags.val);
 }
···
 
 static struct tracer graph_trace __tracer_data = {
 	.name		= "function_graph",
+	.update_thresh	= graph_trace_update_thresh,
 	.open		= graph_trace_open,
 	.pipe_open	= graph_trace_open,
 	.close		= graph_trace_close,
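The ftrace_graph_stop()/ftrace_graph_is_dead() pair above is a classic one-way kill switch: a single flag that is set once on a fatal error and checked at the top of every critical path. A minimal userspace sketch of the same pattern, with illustrative stand-in names (this is not the kernel implementation):

```c
#include <errno.h>
#include <stdbool.h>

/* One-way kill switch: flipped once by graph_stop(), never cleared,
 * checked at the entry of every hot path so no further work is done
 * after a severe error has been detected. */
static bool kill_graph;

static bool graph_is_dead(void)
{
	return kill_graph;
}

static void graph_stop(void)
{
	kill_graph = true;
}

/* Hypothetical stand-in for a critical path such as
 * ftrace_push_return_trace(): refuses all work once dead. */
static int push_return_trace(void)
{
	if (graph_is_dead())
		return -EBUSY;	/* bail out before touching any state */
	return 0;		/* normal path: record the return address */
}
```

The flag is intentionally never reset: once tracing has corrupted or might corrupt state, the only safe behavior is to stay off until reboot (or re-registration), which is why the kernel comment calls it "permanently disable".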
+7 -275
kernel/trace/trace_output.c
···
 
 static int next_event_type = __TRACE_LAST_TYPE + 1;
 
-int trace_print_seq(struct seq_file *m, struct trace_seq *s)
-{
-	int len = s->len >= PAGE_SIZE ? PAGE_SIZE - 1 : s->len;
-	int ret;
-
-	ret = seq_write(m, s->buffer, len);
-
-	/*
-	 * Only reset this buffer if we successfully wrote to the
-	 * seq_file buffer.
-	 */
-	if (!ret)
-		trace_seq_init(s);
-
-	return ret;
-}
-
 enum print_line_t trace_print_bputs_msg_only(struct trace_iterator *iter)
 {
 	struct trace_seq *s = &iter->seq;
···
 	return TRACE_TYPE_HANDLED;
 }
 
-/**
- * trace_seq_printf - sequence printing of trace information
- * @s: trace sequence descriptor
- * @fmt: printf format string
- *
- * It returns 0 if the trace oversizes the buffer's free
- * space, 1 otherwise.
- *
- * The tracer may use either sequence operations or its own
- * copy to user routines. To simplify formating of a trace
- * trace_seq_printf is used to store strings into a special
- * buffer (@s). Then the output may be either used by
- * the sequencer or pulled into another buffer.
- */
-int
-trace_seq_printf(struct trace_seq *s, const char *fmt, ...)
-{
-	int len = (PAGE_SIZE - 1) - s->len;
-	va_list ap;
-	int ret;
-
-	if (s->full || !len)
-		return 0;
-
-	va_start(ap, fmt);
-	ret = vsnprintf(s->buffer + s->len, len, fmt, ap);
-	va_end(ap);
-
-	/* If we can't write it all, don't bother writing anything */
-	if (ret >= len) {
-		s->full = 1;
-		return 0;
-	}
-
-	s->len += ret;
-
-	return 1;
-}
-EXPORT_SYMBOL_GPL(trace_seq_printf);
-
-/**
- * trace_seq_bitmask - put a list of longs as a bitmask print output
- * @s: trace sequence descriptor
- * @maskp: points to an array of unsigned longs that represent a bitmask
- * @nmaskbits: The number of bits that are valid in @maskp
- *
- * It returns 0 if the trace oversizes the buffer's free
- * space, 1 otherwise.
- *
- * Writes a ASCII representation of a bitmask string into @s.
- */
-int
-trace_seq_bitmask(struct trace_seq *s, const unsigned long *maskp,
-		  int nmaskbits)
-{
-	int len = (PAGE_SIZE - 1) - s->len;
-	int ret;
-
-	if (s->full || !len)
-		return 0;
-
-	ret = bitmap_scnprintf(s->buffer, len, maskp, nmaskbits);
-	s->len += ret;
-
-	return 1;
-}
-EXPORT_SYMBOL_GPL(trace_seq_bitmask);
-
-/**
- * trace_seq_vprintf - sequence printing of trace information
- * @s: trace sequence descriptor
- * @fmt: printf format string
- *
- * The tracer may use either sequence operations or its own
- * copy to user routines. To simplify formating of a trace
- * trace_seq_printf is used to store strings into a special
- * buffer (@s). Then the output may be either used by
- * the sequencer or pulled into another buffer.
- */
-int
-trace_seq_vprintf(struct trace_seq *s, const char *fmt, va_list args)
-{
-	int len = (PAGE_SIZE - 1) - s->len;
-	int ret;
-
-	if (s->full || !len)
-		return 0;
-
-	ret = vsnprintf(s->buffer + s->len, len, fmt, args);
-
-	/* If we can't write it all, don't bother writing anything */
-	if (ret >= len) {
-		s->full = 1;
-		return 0;
-	}
-
-	s->len += ret;
-
-	return len;
-}
-EXPORT_SYMBOL_GPL(trace_seq_vprintf);
-
-int trace_seq_bprintf(struct trace_seq *s, const char *fmt, const u32 *binary)
-{
-	int len = (PAGE_SIZE - 1) - s->len;
-	int ret;
-
-	if (s->full || !len)
-		return 0;
-
-	ret = bstr_printf(s->buffer + s->len, len, fmt, binary);
-
-	/* If we can't write it all, don't bother writing anything */
-	if (ret >= len) {
-		s->full = 1;
-		return 0;
-	}
-
-	s->len += ret;
-
-	return len;
-}
-
-/**
- * trace_seq_puts - trace sequence printing of simple string
- * @s: trace sequence descriptor
- * @str: simple string to record
- *
- * The tracer may use either the sequence operations or its own
- * copy to user routines. This function records a simple string
- * into a special buffer (@s) for later retrieval by a sequencer
- * or other mechanism.
- */
-int trace_seq_puts(struct trace_seq *s, const char *str)
-{
-	int len = strlen(str);
-
-	if (s->full)
-		return 0;
-
-	if (len > ((PAGE_SIZE - 1) - s->len)) {
-		s->full = 1;
-		return 0;
-	}
-
-	memcpy(s->buffer + s->len, str, len);
-	s->len += len;
-
-	return len;
-}
-
-int trace_seq_putc(struct trace_seq *s, unsigned char c)
-{
-	if (s->full)
-		return 0;
-
-	if (s->len >= (PAGE_SIZE - 1)) {
-		s->full = 1;
-		return 0;
-	}
-
-	s->buffer[s->len++] = c;
-
-	return 1;
-}
-EXPORT_SYMBOL(trace_seq_putc);
-
-int trace_seq_putmem(struct trace_seq *s, const void *mem, size_t len)
-{
-	if (s->full)
-		return 0;
-
-	if (len > ((PAGE_SIZE - 1) - s->len)) {
-		s->full = 1;
-		return 0;
-	}
-
-	memcpy(s->buffer + s->len, mem, len);
-	s->len += len;
-
-	return len;
-}
-
-int trace_seq_putmem_hex(struct trace_seq *s, const void *mem, size_t len)
-{
-	unsigned char hex[HEX_CHARS];
-	const unsigned char *data = mem;
-	int i, j;
-
-	if (s->full)
-		return 0;
-
-#ifdef __BIG_ENDIAN
-	for (i = 0, j = 0; i < len; i++) {
-#else
-	for (i = len-1, j = 0; i >= 0; i--) {
-#endif
-		hex[j++] = hex_asc_hi(data[i]);
-		hex[j++] = hex_asc_lo(data[i]);
-	}
-	hex[j++] = ' ';
-
-	return trace_seq_putmem(s, hex, j);
-}
-
-void *trace_seq_reserve(struct trace_seq *s, size_t len)
-{
-	void *ret;
-
-	if (s->full)
-		return NULL;
-
-	if (len > ((PAGE_SIZE - 1) - s->len)) {
-		s->full = 1;
-		return NULL;
-	}
-
-	ret = s->buffer + s->len;
-	s->len += len;
-
-	return ret;
-}
-
-int trace_seq_path(struct trace_seq *s, const struct path *path)
-{
-	unsigned char *p;
-
-	if (s->full)
-		return 0;
-
-	if (s->len >= (PAGE_SIZE - 1)) {
-		s->full = 1;
-		return 0;
-	}
-
-	p = d_path(path, s->buffer + s->len, PAGE_SIZE - s->len);
-	if (!IS_ERR(p)) {
-		p = mangle_path(s->buffer + s->len, p, "\n");
-		if (p) {
-			s->len = p - s->buffer;
-			return 1;
-		}
-	} else {
-		s->buffer[s->len++] = '?';
-		return 1;
-	}
-
-	s->full = 1;
-	return 0;
-}
-
 const char *
 ftrace_print_flags_seq(struct trace_seq *p, const char *delim,
 		       unsigned long flags,
···
 {
 	unsigned long mask;
 	const char *str;
-	const char *ret = p->buffer + p->len;
+	const char *ret = trace_seq_buffer_ptr(p);
 	int i, first = 1;
 
 	for (i = 0; flag_array[i].name && flags; i++) {
···
 		       const struct trace_print_flags *symbol_array)
 {
 	int i;
-	const char *ret = p->buffer + p->len;
+	const char *ret = trace_seq_buffer_ptr(p);
 
 	for (i = 0; symbol_array[i].name; i++) {
···
 		break;
 	}
 
-	if (ret == (const char *)(p->buffer + p->len))
+	if (ret == (const char *)(trace_seq_buffer_ptr(p)))
 		trace_seq_printf(p, "0x%lx", val);
 
 	trace_seq_putc(p, 0);
···
 		      const struct trace_print_flags_u64 *symbol_array)
 {
 	int i;
-	const char *ret = p->buffer + p->len;
+	const char *ret = trace_seq_buffer_ptr(p);
 
 	for (i = 0; symbol_array[i].name; i++) {
···
 		break;
 	}
 
-	if (ret == (const char *)(p->buffer + p->len))
+	if (ret == (const char *)(trace_seq_buffer_ptr(p)))
 		trace_seq_printf(p, "0x%llx", val);
 
 	trace_seq_putc(p, 0);
···
 ftrace_print_bitmask_seq(struct trace_seq *p, void *bitmask_ptr,
 			 unsigned int bitmask_size)
 {
-	const char *ret = p->buffer + p->len;
+	const char *ret = trace_seq_buffer_ptr(p);
 
 	trace_seq_bitmask(p, bitmask_ptr, bitmask_size * 8);
 	trace_seq_putc(p, 0);
···
 ftrace_print_hex_seq(struct trace_seq *p, const unsigned char *buf, int buf_len)
 {
 	int i;
-	const char *ret = p->buffer + p->len;
+	const char *ret = trace_seq_buffer_ptr(p);
 
 	for (i = 0; i < buf_len; i++)
 		trace_seq_printf(p, "%s%2.2x", i == 0 ? "" : " ", buf[i]);
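The `trace_seq_buffer_ptr()` conversions above all follow the same idiom: remember the current write position before formatting, then compare against it afterward to tell whether anything was appended (the kernel uses this to decide whether to fall back to a raw "0x%lx" print). A minimal userspace sketch of the idiom, with a hypothetical miniature `struct seq` in place of the kernel's `struct trace_seq`:

```c
#include <string.h>

/* Hypothetical miniature of the trace_seq descriptor, for illustration. */
struct seq {
	char buf[64];
	unsigned int len;
};

/* Analog of trace_seq_buffer_ptr(): the current end-of-content position. */
static char *seq_buffer_ptr(struct seq *s)
{
	return s->buf + s->len;
}

/* Append a string if it fits; silently drop it otherwise. */
static void seq_puts(struct seq *s, const char *str)
{
	size_t n = strlen(str);

	if (n <= sizeof(s->buf) - 1 - s->len) {
		memcpy(s->buf + s->len, str, n);
		s->len += n;
	}
}

/* The idiom: save the pointer, write, compare. Returns 1 if the write
 * appended anything, 0 if the buffer position is unchanged. */
static int wrote_something(struct seq *s, const char *str)
{
	const char *before = seq_buffer_ptr(s);

	seq_puts(s, str);
	return seq_buffer_ptr(s) != before;
}
```

Using an accessor instead of open-coding `p->buffer + p->len` lets the descriptor's internals change (as this series does when moving trace_seq into its own file) without touching every caller.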
-4
kernel/trace/trace_output.h
···
 extern int __unregister_ftrace_event(struct trace_event *event);
 extern struct rw_semaphore trace_event_sem;
 
-#define MAX_MEMHEX_BYTES	8
-#define HEX_CHARS		(MAX_MEMHEX_BYTES*2 + 1)
-
 #define SEQ_PUT_FIELD_RET(s, x)				\
 do {							\
 	if (!trace_seq_putmem(s, &(x), sizeof(x)))	\
···
 
 #define SEQ_PUT_HEX_FIELD_RET(s, x)				\
 do {								\
-	BUILD_BUG_ON(sizeof(x) > MAX_MEMHEX_BYTES);		\
 	if (!trace_seq_putmem_hex(s, &(x), sizeof(x)))		\
 		return TRACE_TYPE_PARTIAL_LINE;			\
 } while (0)
+428
kernel/trace/trace_seq.c
···
+/*
+ * trace_seq.c
+ *
+ * Copyright (C) 2008-2014 Red Hat Inc, Steven Rostedt <srostedt@redhat.com>
+ *
+ * The trace_seq is a handy tool that allows you to pass a descriptor around
+ * to a buffer that other functions can write to. It is similar to the
+ * seq_file functionality but has some differences.
+ *
+ * To use it, the trace_seq must be initialized with trace_seq_init().
+ * This will set up the counters within the descriptor. You can call
+ * trace_seq_init() more than once to reset the trace_seq to start
+ * from scratch.
+ *
+ * The buffer size is currently PAGE_SIZE, although it may become dynamic
+ * in the future.
+ *
+ * A write to the buffer will either succeed or fail. That is, unlike
+ * sprintf() there will not be a partial write (well, it may write into
+ * the buffer but it won't update the pointers). This allows users to
+ * try to write something into the trace_seq buffer and if it fails
+ * they can flush it and try again.
+ *
+ */
+#include <linux/uaccess.h>
+#include <linux/seq_file.h>
+#include <linux/trace_seq.h>
+
+/* How much buffer is left on the trace_seq? */
+#define TRACE_SEQ_BUF_LEFT(s) ((PAGE_SIZE - 1) - (s)->len)
+
+/* How much buffer is written? */
+#define TRACE_SEQ_BUF_USED(s) min((s)->len, (unsigned int)(PAGE_SIZE - 1))
+
+/**
+ * trace_print_seq - move the contents of trace_seq into a seq_file
+ * @m: the seq_file descriptor that is the destination
+ * @s: the trace_seq descriptor that is the source.
+ *
+ * Returns 0 on success and non-zero on error. If it succeeds in
+ * writing to the seq_file it will reset the trace_seq, otherwise
+ * it does not modify the trace_seq so that the caller can try again.
+ */
+int trace_print_seq(struct seq_file *m, struct trace_seq *s)
+{
+	unsigned int len = TRACE_SEQ_BUF_USED(s);
+	int ret;
+
+	ret = seq_write(m, s->buffer, len);
+
+	/*
+	 * Only reset this buffer if we successfully wrote to the
+	 * seq_file buffer. This lets the caller try again or
+	 * do something else with the contents.
+	 */
+	if (!ret)
+		trace_seq_init(s);
+
+	return ret;
+}
+
+/**
+ * trace_seq_printf - sequence printing of trace information
+ * @s: trace sequence descriptor
+ * @fmt: printf format string
+ *
+ * The tracer may use either sequence operations or its own
+ * copy to user routines. To simplify formatting of a trace,
+ * trace_seq_printf() is used to store strings into a special
+ * buffer (@s). Then the output may be either used by
+ * the sequencer or pulled into another buffer.
+ *
+ * Returns 1 if we successfully wrote all of the contents to
+ * the buffer.
+ * Returns 0 if the length to write is bigger than the
+ * reserved buffer space. In this case, nothing gets written.
+ */
+int trace_seq_printf(struct trace_seq *s, const char *fmt, ...)
+{
+	unsigned int len = TRACE_SEQ_BUF_LEFT(s);
+	va_list ap;
+	int ret;
+
+	if (s->full || !len)
+		return 0;
+
+	va_start(ap, fmt);
+	ret = vsnprintf(s->buffer + s->len, len, fmt, ap);
+	va_end(ap);
+
+	/* If we can't write it all, don't bother writing anything */
+	if (ret >= len) {
+		s->full = 1;
+		return 0;
+	}
+
+	s->len += ret;
+
+	return 1;
+}
+EXPORT_SYMBOL_GPL(trace_seq_printf);
+
+/**
+ * trace_seq_bitmask - write a bitmask array in its ASCII representation
+ * @s: trace sequence descriptor
+ * @maskp: points to an array of unsigned longs that represent a bitmask
+ * @nmaskbits: The number of bits that are valid in @maskp
+ *
+ * Writes an ASCII representation of a bitmask string into @s.
+ *
+ * Returns 1 if we successfully wrote all of the contents to
+ * the buffer.
+ * Returns 0 if the length to write is bigger than the
+ * reserved buffer space. In this case, nothing gets written.
+ */
+int trace_seq_bitmask(struct trace_seq *s, const unsigned long *maskp,
+		      int nmaskbits)
+{
+	unsigned int len = TRACE_SEQ_BUF_LEFT(s);
+	int ret;
+
+	if (s->full || !len)
+		return 0;
+
+	ret = bitmap_scnprintf(s->buffer, len, maskp, nmaskbits);
+	s->len += ret;
+
+	return 1;
+}
+EXPORT_SYMBOL_GPL(trace_seq_bitmask);
+
+/**
+ * trace_seq_vprintf - sequence printing of trace information
+ * @s: trace sequence descriptor
+ * @fmt: printf format string
+ *
+ * The tracer may use either sequence operations or its own
+ * copy to user routines. To simplify formatting of a trace,
+ * trace_seq_printf is used to store strings into a special
+ * buffer (@s). Then the output may be either used by
+ * the sequencer or pulled into another buffer.
+ *
+ * Returns how much it wrote to the buffer.
+ */
+int trace_seq_vprintf(struct trace_seq *s, const char *fmt, va_list args)
+{
+	unsigned int len = TRACE_SEQ_BUF_LEFT(s);
+	int ret;
+
+	if (s->full || !len)
+		return 0;
+
+	ret = vsnprintf(s->buffer + s->len, len, fmt, args);
+
+	/* If we can't write it all, don't bother writing anything */
+	if (ret >= len) {
+		s->full = 1;
+		return 0;
+	}
+
+	s->len += ret;
+
+	return len;
+}
+EXPORT_SYMBOL_GPL(trace_seq_vprintf);
+
+/**
+ * trace_seq_bprintf - Write the printf string from binary arguments
+ * @s: trace sequence descriptor
+ * @fmt: The format string for the @binary arguments
+ * @binary: The binary arguments for @fmt.
+ *
+ * When recording in a fast path, a printf may be recorded with just
+ * saving the format and the arguments as they were passed to the
+ * function, instead of wasting cycles converting the arguments into
+ * ASCII characters. Instead, the arguments are saved in a 32 bit
+ * word array that is defined by the format string constraints.
+ *
+ * This function will take the format and the binary array and finish
+ * the conversion into the ASCII string within the buffer.
+ *
+ * Returns how much it wrote to the buffer.
+ */
+int trace_seq_bprintf(struct trace_seq *s, const char *fmt, const u32 *binary)
+{
+	unsigned int len = TRACE_SEQ_BUF_LEFT(s);
+	int ret;
+
+	if (s->full || !len)
+		return 0;
+
+	ret = bstr_printf(s->buffer + s->len, len, fmt, binary);
+
+	/* If we can't write it all, don't bother writing anything */
+	if (ret >= len) {
+		s->full = 1;
+		return 0;
+	}
+
+	s->len += ret;
+
+	return len;
+}
+EXPORT_SYMBOL_GPL(trace_seq_bprintf);
+
+/**
+ * trace_seq_puts - trace sequence printing of simple string
+ * @s: trace sequence descriptor
+ * @str: simple string to record
+ *
+ * The tracer may use either the sequence operations or its own
+ * copy to user routines. This function records a simple string
+ * into a special buffer (@s) for later retrieval by a sequencer
+ * or other mechanism.
+ *
+ * Returns how much it wrote to the buffer.
+ */
+int trace_seq_puts(struct trace_seq *s, const char *str)
+{
+	unsigned int len = strlen(str);
+
+	if (s->full)
+		return 0;
+
+	if (len > TRACE_SEQ_BUF_LEFT(s)) {
+		s->full = 1;
+		return 0;
+	}
+
+	memcpy(s->buffer + s->len, str, len);
+	s->len += len;
+
+	return len;
+}
+EXPORT_SYMBOL_GPL(trace_seq_puts);
+
+/**
+ * trace_seq_putc - trace sequence printing of simple character
+ * @s: trace sequence descriptor
+ * @c: simple character to record
+ *
+ * The tracer may use either the sequence operations or its own
+ * copy to user routines. This function records a simple character
+ * into a special buffer (@s) for later retrieval by a sequencer
+ * or other mechanism.
+ *
+ * Returns how much it wrote to the buffer.
+ */
+int trace_seq_putc(struct trace_seq *s, unsigned char c)
+{
+	if (s->full)
+		return 0;
+
+	if (TRACE_SEQ_BUF_LEFT(s) < 1) {
+		s->full = 1;
+		return 0;
+	}
+
+	s->buffer[s->len++] = c;
+
+	return 1;
+}
+EXPORT_SYMBOL_GPL(trace_seq_putc);
+
+/**
+ * trace_seq_putmem - write raw data into the trace_seq buffer
+ * @s: trace sequence descriptor
+ * @mem: The raw memory to copy into the buffer
+ * @len: The length of the raw memory to copy (in bytes)
+ *
+ * There may be cases where raw memory needs to be written into the
+ * buffer and a strcpy() would not work. Using this function allows
+ * for such cases.
+ *
+ * Returns how much it wrote to the buffer.
+ */
+int trace_seq_putmem(struct trace_seq *s, const void *mem, unsigned int len)
+{
+	if (s->full)
+		return 0;
+
+	if (len > TRACE_SEQ_BUF_LEFT(s)) {
+		s->full = 1;
+		return 0;
+	}
+
+	memcpy(s->buffer + s->len, mem, len);
+	s->len += len;
+
+	return len;
+}
+EXPORT_SYMBOL_GPL(trace_seq_putmem);
+
+#define MAX_MEMHEX_BYTES	8U
+#define HEX_CHARS		(MAX_MEMHEX_BYTES*2 + 1)
+
+/**
+ * trace_seq_putmem_hex - write raw memory into the buffer in ASCII hex
+ * @s: trace sequence descriptor
+ * @mem: The raw memory to write its hex ASCII representation of
+ * @len: The length of the raw memory to copy (in bytes)
+ *
+ * This is similar to trace_seq_putmem() except instead of just copying the
+ * raw memory into the buffer it writes its ASCII representation of it
+ * in hex characters.
+ *
+ * Returns how much it wrote to the buffer.
+ */
+int trace_seq_putmem_hex(struct trace_seq *s, const void *mem,
+			 unsigned int len)
+{
+	unsigned char hex[HEX_CHARS];
+	const unsigned char *data = mem;
+	unsigned int start_len;
+	int i, j;
+	int cnt = 0;
+
+	if (s->full)
+		return 0;
+
+	while (len) {
+		start_len = min(len, HEX_CHARS - 1);
+#ifdef __BIG_ENDIAN
+		for (i = 0, j = 0; i < start_len; i++) {
+#else
+		for (i = start_len-1, j = 0; i >= 0; i--) {
+#endif
+			hex[j++] = hex_asc_hi(data[i]);
+			hex[j++] = hex_asc_lo(data[i]);
+		}
+		if (WARN_ON_ONCE(j == 0 || j/2 > len))
+			break;
+
+		/* j increments twice per loop */
+		len -= j / 2;
+		hex[j++] = ' ';
+
+		cnt += trace_seq_putmem(s, hex, j);
+	}
+	return cnt;
+}
+EXPORT_SYMBOL_GPL(trace_seq_putmem_hex);
+
+/**
+ * trace_seq_path - copy a path into the sequence buffer
+ * @s: trace sequence descriptor
+ * @path: path to write into the sequence buffer.
+ *
+ * Write a path name into the sequence buffer.
+ *
+ * Returns 1 if we successfully wrote all of the contents to
+ * the buffer.
+ * Returns 0 if the length to write is bigger than the
+ * reserved buffer space. In this case, nothing gets written.
+ */
+int trace_seq_path(struct trace_seq *s, const struct path *path)
+{
+	unsigned char *p;
+
+	if (s->full)
+		return 0;
+
+	if (TRACE_SEQ_BUF_LEFT(s) < 1) {
+		s->full = 1;
+		return 0;
+	}
+
+	p = d_path(path, s->buffer + s->len, PAGE_SIZE - s->len);
+	if (!IS_ERR(p)) {
+		p = mangle_path(s->buffer + s->len, p, "\n");
+		if (p) {
+			s->len = p - s->buffer;
+			return 1;
+		}
+	} else {
+		s->buffer[s->len++] = '?';
+		return 1;
+	}
+
+	s->full = 1;
+	return 0;
+}
+EXPORT_SYMBOL_GPL(trace_seq_path);
+
+/**
+ * trace_seq_to_user - copy the sequence buffer to user space
+ * @s: trace sequence descriptor
+ * @ubuf: The userspace memory location to copy to
+ * @cnt: The amount to copy
+ *
+ * Copies the sequence buffer into the userspace memory pointed to
+ * by @ubuf. It starts from the last read position (@s->readpos)
+ * and writes up to @cnt characters or until it reaches the end of
+ * the content in the buffer (@s->len), whichever comes first.
+ *
+ * On success, it returns a positive number: the number of bytes
+ * it copied.
+ *
+ * On failure it returns -EBUSY if all of the content in the
+ * sequence has already been read, which includes the case of
+ * nothing in the sequence (@s->len == @s->readpos).
+ *
+ * Returns -EFAULT if the copy to userspace fails.
+ */
+int trace_seq_to_user(struct trace_seq *s, char __user *ubuf, int cnt)
+{
+	int len;
+	int ret;
+
+	if (!cnt)
+		return 0;
+
+	if (s->len <= s->readpos)
+		return -EBUSY;
+
+	len = s->len - s->readpos;
+	if (cnt > len)
+		cnt = len;
+	ret = copy_to_user(ubuf, s->buffer + s->readpos, cnt);
+	if (ret == cnt)
+		return -EFAULT;
+
+	cnt -= ret;
+
+	s->readpos += cnt;
+	return cnt;
+}
+EXPORT_SYMBOL_GPL(trace_seq_to_user);
+1 -1
samples/trace_events/trace-events-sample.h
···
 	),
 
 	TP_fast_assign(
-		strncpy(__entry->foo, foo, 10);
+		strlcpy(__entry->foo, foo, 10);
 		__entry->bar = bar;
 	),
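The sample fix above matters because `strncpy()` does not NUL-terminate when the source fills the destination, while `strlcpy()` always terminates (truncating if needed). `strlcpy()` is a kernel/BSD function rather than standard C, so the sketch below supplies a minimal local version for illustration; the semantics shown are the standard ones, but `my_strlcpy` itself is not the kernel implementation:

```c
#include <string.h>

/* Minimal strlcpy-style copy: always NUL-terminates the destination
 * (when size > 0) and returns the length of the source, so callers can
 * detect truncation by comparing the return value against size. */
static size_t my_strlcpy(char *dst, const char *src, size_t size)
{
	size_t srclen = strlen(src);

	if (size) {
		size_t n = srclen < size - 1 ? srclen : size - 1;

		memcpy(dst, src, n);
		dst[n] = '\0';	/* guaranteed terminator */
	}
	return srclen;
}
```

With `strncpy(__entry->foo, foo, 10)` and a 10-character-or-longer `foo`, the stored field has no terminator and any later `%s` print of it reads past the buffer; the `strlcpy()` form caps the copy at 9 characters plus the NUL.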