PERF-TRACE(1) | perf Manual | PERF-TRACE(1) |
NAME¶
perf-trace - strace inspired tool
SYNOPSIS¶
perf trace
perf trace record
DESCRIPTION¶
This command will show the events associated with the target, initially syscalls, but also other system events like pagefaults, task lifetime events, scheduling events, etc.
This is a live mode tool in addition to working with perf.data files like the other perf tools. Files can be generated using the perf record command but the session needs to include the raw_syscalls events (-e raw_syscalls:*). Alternatively, perf trace record can be used as a shortcut to automatically include the raw_syscalls events when writing events to a file.
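The record-then-replay workflow described above can be sketched as follows (the `sleep 1` workload is a placeholder for any command; `perf.data` is perf's default output file):

```shell
# Record a workload with the raw_syscalls events included automatically:
perf trace record -- sleep 1

# Equivalent with plain perf record, adding the raw_syscalls events by hand:
perf record -e 'raw_syscalls:*' -- sleep 1

# Replay the recorded session later:
perf trace -i perf.data
```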
The following options apply to perf trace; options to perf trace record are found in the perf record man page.
OPTIONS¶
-a, --all-cpus
System-wide collection from all CPUs.
-e, --expr, --event
List of syscalls and other perf events (tracepoints, HW cache events, etc.) to show. Globbing is supported, e.g.: "epoll_*". Prefixing with ! shows all syscalls but the ones specified.
--filter=<filter>
Event filter. This option should follow an event selector (-e) which selects tracepoint event(s).
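A minimal sketch of combining -e with --filter; the filter uses the kernel's tracepoint filter expression syntax, and the prev_prio field and threshold here are illustrative (valid fields for a tracepoint are listed in its format file under /sys/kernel/debug/tracing/events/):

```shell
# Show only context switches where the previous task's priority was below 100:
perf trace -e sched:sched_switch --filter 'prev_prio < 100'
```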
-D msecs, --delay msecs
After starting the program, wait msecs before measuring. This is useful to filter out the startup phase of the program, which is often very different.
-o, --output=
Output file name.
-p, --pid=
Record events on existing process ID (comma separated list).
-t, --tid=
Record events on existing thread ID (comma separated list).
-u, --uid=
Record events in threads owned by uid. Name or number.
-G, --cgroup
Record events in threads in a cgroup.
Look for cgroups to set at the /sys/fs/cgroup/perf_event directory, then remove the /sys/fs/cgroup/perf_event/ part and try:
perf trace -G A -e sched:*switch
Will set all raw_syscalls:sys_{enter,exit}, pgfault, vfs_getname, etc _and_ sched:sched_switch to the 'A' cgroup, while:
perf trace -e sched:*switch -G A
will only set the sched:sched_switch event to the 'A' cgroup; all the other events (raw_syscalls:sys_{enter,exit}, etc.) are left "without" a cgroup (on the root cgroup, system wide, etc.).
Multiple cgroups:
perf trace -G A -e sched:*switch -G B
the syscall ones go to the 'A' cgroup, the sched:sched_switch goes to the 'B' cgroup.
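Assuming the v1 perf_event cgroup hierarchy is mounted at /sys/fs/cgroup/perf_event as described above, the 'A' and 'B' cgroups from these examples could be set up along these lines ('A' and 'B' are just example names; creating them requires root):

```shell
# Create the cgroups and move the current shell into 'A' before tracing:
mkdir /sys/fs/cgroup/perf_event/A /sys/fs/cgroup/perf_event/B
echo $$ > /sys/fs/cgroup/perf_event/A/cgroup.procs
perf trace -G A -e sched:*switch -G B
```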
--filter-pids=
Filter out events for these pids and for 'trace' itself (comma separated list).
-v, --verbose
Increase the verbosity level.
--no-inherit
Child tasks do not inherit counters.
-m, --mmap-pages=
Number of mmap data pages (must be a power of two) or size specification with appended unit character - B/K/M/G. The size is rounded up to have nearest pages power of two value.
-C, --cpu
Collect samples only on the list of CPUs provided. Multiple CPUs can be provided as a comma-separated list with no space: 0,1. Ranges of CPUs are specified with -: 0-2. Default is to monitor all CPUs.
--duration
Show only events that had a duration greater than N.M ms.
--sched
Accrue thread runtime and print it at the end of the session.
--failure
Show only syscalls that failed, i.e. whose return value was negative.
-i, --input
Process events from a given perf data file.
-T, --time
Print full timestamp rather than the time relative to the first sample.
--comm
Show process COMM right beside its ID, on by default, disable with --no-comm.
-s, --summary
Show only a summary of syscalls by thread with min, max, and average times (in msec) and relative stddev.
-S, --with-summary
Show all syscalls followed by a summary by thread with min, max, and average times (in msec) and relative stddev.
--errno-summary
To be used with -s or -S, to show stats for the errnos in each syscall.
--tool_stats
Show tool stats such as number of times fd->pathname was discovered thru hooking the open syscall return + vfs_getname or via reading /proc/pid/fd, etc.
-f, --force
Don't complain, do it.
-F=[all|min|maj], --pf=[all|min|maj]
Trace pagefaults. Optionally, you can specify whether you want minor, major or all pagefaults. Default value is maj.
--syscalls
Trace system calls. This option is enabled by default, disable it with --no-syscalls.
--call-graph [mode,type,min[,limit],order[,key][,branch]]
Set up and enable call-graph (stack chain/backtrace) recording. See the --call-graph section in the perf record man page for details.
Using this will, for the root user, bump the value of --mmap-pages to 4 times the maximum for non-root users, based on the kernel.perf_event_mlock_kb sysctl. This is done only if the user doesn't specify a --mmap-pages value.
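A sketch of enabling callchains on a live trace; dwarf mode assumes perf was built with DWARF unwinding support, and the nanosleep event and depth limit are illustrative:

```shell
# Trace nanosleep syscalls with DWARF-unwound callchains,
# limiting stack depth to keep the output readable:
perf trace -e nanosleep --call-graph dwarf --max-stack 8 -- sleep 1
```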
--kernel-syscall-graph
Show the kernel callchains on the syscall exit path.
--max-events=N
Stop after processing N events.
--switch-on EVENT_NAME
Only consider events after this event is found.
--switch-off EVENT_NAME
Stop considering events after this event is found.
--show-on-off-events
Show the --switch-on/off events too.
--max-stack
Set the stack depth limit when parsing the callchain, anywhere between 0 and PERF_MAX_STACK_DEPTH. Implies '--call-graph dwarf' when --call-graph is not present on the command line, on systems where DWARF unwinding was built in.
Default: /proc/sys/kernel/perf_event_max_stack when present for live sessions (without --input/-i), 127 otherwise.
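The live-session default can be inspected, or raised for deeper callchains, via the sysctl mentioned above (raising it requires root):

```shell
# Current kernel-wide limit used as the live-session default:
cat /proc/sys/kernel/perf_event_max_stack

# Raise it for deeper callchains (root only):
sysctl kernel.perf_event_max_stack=256
```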
--min-stack
Set the minimum stack depth when parsing the callchain, anywhere between 0 and PERF_MAX_STACK_DEPTH. Implies '--call-graph dwarf' when --call-graph is not present on the command line, on systems where DWARF unwinding was built in.
--print-sample
Print the PERF_RECORD_SAMPLE PERF_SAMPLE_ info for the raw_syscalls:sys_{enter,exit} tracepoints, to help with debugging.
--proc-map-timeout
When processing pre-existing threads /proc/XXX/mmap, it may take a long time, because the file may be huge. A time out is needed in such cases. This option sets the time out limit. The default value is 500 ms.
--sort-events
Do sorting on batches of events, use when noticing out of order events that may happen, for instance, when a thread gets migrated to a different CPU while processing a syscall.
--libtraceevent_print
Use libtraceevent to print tracepoint arguments.
--map-dump
Dump BPF maps setup by events passed via -e.
PAGEFAULTS¶
When tracing pagefaults, the format of the trace is as follows:
<min|maj>fault [<ip.symbol>+<ip.offset>] => <addr.dso@addr.offset> (<map type><addr level>).
For symbols resolution you may need to install debugging symbols.
Please be aware that the duration is currently always 0 and doesn't reflect the actual time it took for the fault to be handled!
When --verbose specified, perf trace tries to print all available information for both IP and fault address in the form of dso@symbol+offset.
EXAMPLES¶
Trace only major pagefaults:
$ perf trace --no-syscalls -F
Trace syscalls, major and minor pagefaults:
$ perf trace -F all
1416.547 ( 0.000 ms): python/20235 majfault [CRYPTO_push_info_+0x0] => /lib/x86_64-linux-gnu/libcrypto.so.1.0.0@0x61be0 (x.)
As you can see, there was a major pagefault in the python process, from the CRYPTO_push_info_ routine, which faulted somewhere in libcrypto.so.
Trace the first 4 open, openat or open_by_handle_at syscalls (in the future more syscalls may match here):
$ perf trace -e open* --max-events 4
2272.992 ( 0.037 ms): gnome-shell/1370 openat(dfd: CWD, filename: /proc/self/stat) = 31
2277.481 ( 0.139 ms): gnome-shell/3039 openat(dfd: CWD, filename: /proc/self/stat) = 65
3026.398 ( 0.076 ms): gnome-shell/3039 openat(dfd: CWD, filename: /proc/self/stat) = 65
4294.665 ( 0.015 ms): sed/15879 openat(dfd: CWD, filename: /etc/ld.so.cache, flags: CLOEXEC) = 3
Trace the first minor page fault when running a workload:
# perf trace -F min --max-stack=7 --max-events 1 sleep 1
0.000 ( 0.000 ms): sleep/18006 minfault [__clear_user+0x1a] => 0x5626efa56080 (?k)
__clear_user ([kernel.kallsyms])
load_elf_binary ([kernel.kallsyms])
search_binary_handler ([kernel.kallsyms])
__do_execve_file.isra.33 ([kernel.kallsyms])
__x64_sys_execve ([kernel.kallsyms])
do_syscall_64 ([kernel.kallsyms])
entry_SYSCALL_64 ([kernel.kallsyms])
Trace the next minor page fault to take place on the first CPU:
# perf trace -F min --call-graph=dwarf --max-events 1 --cpu 0
0.000 ( 0.000 ms): Web Content/17136 minfault [js::gc::Chunk::fetchNextDecommittedArena+0x4b] => 0x7fbe6181b000 (?.)
js::gc::FreeSpan::initAsEmpty (inlined)
js::gc::Arena::setAsNotAllocated (inlined)
js::gc::Chunk::fetchNextDecommittedArena (/usr/lib64/firefox/libxul.so)
js::gc::Chunk::allocateArena (/usr/lib64/firefox/libxul.so)
js::gc::GCRuntime::allocateArena (/usr/lib64/firefox/libxul.so)
js::gc::ArenaLists::allocateFromArena (/usr/lib64/firefox/libxul.so)
js::gc::GCRuntime::tryNewTenuredThing<JSString, (js::AllowGC)1> (inlined)
js::AllocateString<JSString, (js::AllowGC)1> (/usr/lib64/firefox/libxul.so)
js::Allocate<JSThinInlineString, (js::AllowGC)1> (inlined)
JSThinInlineString::new_<(js::AllowGC)1> (inlined)
AllocateInlineString<(js::AllowGC)1, unsigned char> (inlined)
js::ConcatStrings<(js::AllowGC)1> (/usr/lib64/firefox/libxul.so)
[0x18b26e6bc2bd] (/tmp/perf-17136.map)
Trace the next two sched:sched_switch events, four block:*_plug events, the next block:*_unplug and the next three net:*dev_queue events, this last one with a backtrace of at most 16 entries, system wide:
# perf trace -e sched:*switch/nr=2/,block:*_plug/nr=4/,block:*_unplug/nr=1/,net:*dev_queue/nr=3,max-stack=16/
0.000 :0/0 sched:sched_switch:swapper/2:0 [120] S ==> rcu_sched:10 [120]
0.015 rcu_sched/10 sched:sched_switch:rcu_sched:10 [120] R ==> swapper/2:0 [120]
254.198 irq/50-iwlwifi/680 net:net_dev_queue:dev=wlp3s0 skbaddr=0xffff93498051f600 len=66
__dev_queue_xmit ([kernel.kallsyms])
273.977 :0/0 net:net_dev_queue:dev=wlp3s0 skbaddr=0xffff93498051f600 len=78
__dev_queue_xmit ([kernel.kallsyms])
274.007 :0/0 net:net_dev_queue:dev=wlp3s0 skbaddr=0xffff93498051ff00 len=78
__dev_queue_xmit ([kernel.kallsyms])
2930.140 kworker/u16:58/2722 block:block_plug:[kworker/u16:58]
2930.162 kworker/u16:58/2722 block:block_unplug:[kworker/u16:58] 1
4466.094 jbd2/dm-2-8/748 block:block_plug:[jbd2/dm-2-8]
8050.123 kworker/u16:30/2694 block:block_plug:[kworker/u16:30]
8050.271 kworker/u16:30/2694 block:block_plug:[kworker/u16:30]
SEE ALSO¶
perf-record(1), perf-script(1)
2024-11-10 | perf |