
PYINSTRUMENT(1) pyinstrument PYINSTRUMENT(1)

NAME

pyinstrument - pyinstrument 5.1.1

PYINSTRUMENT

[image: Screenshot]
<https://github.com/joerick/pyinstrument/raw/main/docs/img/screenshot.jpg>

Pyinstrument is a Python profiler. A profiler is a tool to help you optimize your code - make it faster. To get the biggest speed increase you should focus on the slowest part of your program <https://en.wikipedia.org/wiki/Amdahl%27s_law>. Pyinstrument helps you find it!

☕️ Not sure where to start? Check out this video tutorial from calmcode.io <https://calmcode.io/pyinstrument/introduction.html>!


USER GUIDE

Installation

pip install pyinstrument


Pyinstrument supports Python 3.8+.

Profile a Python script

Call Pyinstrument directly from the command line. Instead of writing python3 script.py, type pyinstrument script.py. Your script will run as normal, and at the end (or when you press ^C), Pyinstrument will output a colored summary showing where most of the time was spent.

Here are the options you can use:

Usage: pyinstrument [options] scriptfile [arg] ...
Options:

--version show program's version number and exit
-h, --help show this help message and exit
--load=FILENAME instead of running a script, load a profile session
from a pyisession file
--load-prev=IDENTIFIER
instead of running a script, load a previous profile
session as specified by an identifier
-m MODULE run library module as a script, like 'python -m
module'
-c PROGRAM program passed in as string, like 'python -c "..."'
--from-path (POSIX only) instead of the working directory, look
for scriptfile in the PATH environment variable
-o OUTFILE, --outfile=OUTFILE
save to <outfile>
-r RENDERER, --renderer=RENDERER
how the report should be rendered. One of: 'text',
'html', 'json', 'speedscope', 'pyisession', 'pstats',
or python import path to a renderer class. Defaults to
the appropriate format for the extension if OUTFILE is
given, otherwise, defaults to 'text'.
-p RENDER_OPTION, --render-option=RENDER_OPTION
options to pass to the renderer, in the format
'flag_name' or 'option_name=option_value'. For
example, to set the option 'time', pass '-p
time=percent_of_total'. To pass multiple options, use
the -p option multiple times. You can set processor
options using dot-syntax, like '-p
processor_options.filter_threshold=0'. option_value is
parsed as a JSON value or a string.
-t, --timeline render as a timeline - preserve ordering and don't
condense repeated calls
--hide=EXPR glob-style pattern matching the file paths whose
frames to hide. Defaults to hiding non-application
code
--hide-regex=REGEX regex matching the file paths whose frames to hide.
Useful if --hide doesn't give enough control.
--show=EXPR glob-style pattern matching the file paths whose
frames to show, regardless of --hide or --hide-regex.
For example, use --show '*/<library>/*' to show frames
within a library that would otherwise be hidden.
--show-regex=REGEX regex matching the file paths whose frames to always
show. Useful if --show doesn't give enough control.
--show-all show everything
--unicode (text renderer only) force unicode text output
--no-unicode (text renderer only) force ascii text output
--color (text renderer only) force ansi color text output
--no-color (text renderer only) force no color text output
-i INTERVAL, --interval=INTERVAL
Minimum time, in seconds, between each stack sample.
Smaller values allow resolving shorter duration
function calls but incur a greater runtime and memory
consumption overhead. For longer running scripts,
setting a larger interval reduces the memory
consumption required to store the stack samples.
--use-timing-thread Use a separate thread to time the interval between
stack samples. This can reduce the overhead of
sampling on some systems.


Protip: -r html will give you an interactive profile report as HTML - you can really explore this way!
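
For example, to write the interactive HTML report to a file (script.py stands in for your own script):

pyinstrument -r html -o profile.html script.py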

Profile a Python CLI command

For profiling an installed Python script via the "console_script" entry point <https://packaging.python.org/en/latest/specifications/entry-points/#use-for-scripts>, call Pyinstrument directly from the command line with the --from-path flag. Instead of writing cli-script, type pyinstrument --from-path cli-script. Your script will run as normal, and at the end (or when you press ^C), Pyinstrument will output a colored summary showing where most of the time was spent.

Profile a specific chunk of code

Pyinstrument also has a Python API. You can use a with-block, like this:

import pyinstrument

with pyinstrument.profile():
    # code you want to profile
    ...


Or you can decorate a function/method, like this:

import pyinstrument

@pyinstrument.profile()
def my_function():
    # code you want to profile
    ...


There's also a lower-level API called Profiler, that's more flexible:

from pyinstrument import Profiler
profiler = Profiler()
profiler.start()
# code you want to profile
profiler.stop()
profiler.print()


If you get "No samples were recorded." because your code executed in under 1ms, hooray! If you still want to instrument the code, set an interval value smaller than the default 0.001 (1 millisecond) like this:

pyinstrument.profile(interval=0.0001)
# or,
profiler = Profiler(interval=0.0001)
...


Experiment with the interval value to see different depths, but keep in mind that smaller intervals increase the overhead of profiling.

Protip: To explore the profile in a web browser, use profiler.open_in_browser() <#pyinstrument.Profiler.open_in_browser>. To save this HTML for later, use profiler.output_html() <#pyinstrument.Profiler.output_html>.
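
A minimal sketch of both, using a sleep as a stand-in for the code you want to profile:

import time
from pyinstrument import Profiler

profiler = Profiler()
profiler.start()
time.sleep(1)  # stand-in for the code you want to profile
profiler.stop()

profiler.open_in_browser()            # explore interactively in the browser
with open("profile.html", "w") as f:  # or keep the HTML for later
    f.write(profiler.output_html())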

Profile code in Jupyter/IPython

Via IPython magics <https://ipython.readthedocs.io/en/stable/interactive/magics.html>, you can profile a line or a cell in IPython or Jupyter.

Example:

%load_ext pyinstrument


%%pyinstrument
import time

def a():
    b()
    c()

def b():
    d()

def c():
    d()

def d():
    e()

def e():
    time.sleep(1)

a()


To customize options, see %%pyinstrument??.

Profile a web request in Django

To profile Django web requests, add pyinstrument.middleware.ProfilerMiddleware to MIDDLEWARE in your settings.py.
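
For example, a sketch of the settings.py change (keep your existing middleware entries):

# settings.py
MIDDLEWARE = [
    # ... your existing middleware ...
    "pyinstrument.middleware.ProfilerMiddleware",
]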

Profile specific request

Once installed, add ?profile to the end of a request URL to activate the profiler. Your request will run as normal, but instead of getting the response, you'll get pyinstrument's analysis of the request in a web page.

Save all requests to a directory

If you're writing an API, it's not easy to change the URL when you want to profile something. In this case, add PYINSTRUMENT_PROFILE_DIR = 'profiles' to your settings.py. Pyinstrument will profile every request and save the HTML output to the folder profiles in your working directory.

Custom file name by string

You can further customize the filename by adding PYINSTRUMENT_FILENAME to settings.py; the default value is "{total_time:.3f}s {path} {timestamp:.0f}.{ext}".
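
For example, a sketch that drops the timestamp and keeps only the path and total time (the placeholders shown are those from the default template):

# settings.py
PYINSTRUMENT_FILENAME = "{total_time:.3f}s {path}.{ext}"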

Custom file name by callback function

For more control, you can provide a callback function that returns a filename as a string, by adding PYINSTRUMENT_FILENAME_CALLBACK to settings.py.

def get_pyinstrument_filename(request, session, renderer):
    path = request.get_full_path().replace("/", "_")[:100]
    ext = renderer.output_file_extension
    filename = f"{request.method}_{session.duration}{path}.{ext}"
    return filename

PYINSTRUMENT_FILENAME_CALLBACK = get_pyinstrument_filename


(This callback takes precedence over PYINSTRUMENT_FILENAME).

Control shown profiling page

If you want to show the profiling page depending on the request, define PYINSTRUMENT_SHOW_CALLBACK in your settings.py as a dotted path to a function that determines whether the page should be shown. The callback has the signature callback(request) and returns True or False.

def custom_show_pyinstrument(request):
    return request.user.is_superuser

PYINSTRUMENT_SHOW_CALLBACK = "%s.custom_show_pyinstrument" % __name__


You can configure the profile output type using the settings variable PYINSTRUMENT_PROFILE_DIR_RENDERER. The default value is pyinstrument.renderers.HTMLRenderer. The supported renderers are pyinstrument.renderers.JSONRenderer, pyinstrument.renderers.HTMLRenderer and pyinstrument.renderers.SpeedscopeRenderer.
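
For example, a sketch of settings.py that saves Speedscope-format profiles instead of HTML:

# settings.py
PYINSTRUMENT_PROFILE_DIR = "profiles"
PYINSTRUMENT_PROFILE_DIR_RENDERER = "pyinstrument.renderers.SpeedscopeRenderer"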

Profile a web request in Flask

A simple setup to profile a Flask application is the following:

from flask import Flask, g, make_response, request
from pyinstrument import Profiler

app = Flask(__name__)

@app.before_request
def before_request():
    if "profile" in request.args:
        g.profiler = Profiler()
        g.profiler.start()

@app.after_request
def after_request(response):
    if not hasattr(g, "profiler"):
        return response
    g.profiler.stop()
    output_html = g.profiler.output_html()
    return make_response(output_html)


This will check for the ?profile query param on each request and, if found, start profiling. After each request where the profiler was running, it creates the HTML output and returns that instead of the actual response.

Profile a web request in FastAPI

To profile call stacks in FastAPI, you can write a middleware extension for pyinstrument.

Create an async function and decorate with app.middleware('http') where app is the name of your FastAPI application instance.

Make sure you configure a setting to only make this available when required.

from fastapi import Request
from fastapi.responses import HTMLResponse
from pyinstrument import Profiler
PROFILING = True  # Set this from a settings model
if PROFILING:
    @app.middleware("http")
    async def profile_request(request: Request, call_next):
        profiling = request.query_params.get("profile", False)
        if profiling:
            profiler = Profiler()
            profiler.start()
            await call_next(request)
            profiler.stop()
            return HTMLResponse(profiler.output_html())
        else:
            return await call_next(request)


To invoke, make any request to your application with the GET parameter profile=1 and it will print the HTML result from pyinstrument.
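
For example, assuming the app is served locally on port 8000 and has an /items endpoint (both assumptions):

curl "http://127.0.0.1:8000/items?profile=1"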

Profile a web request in Falcon

To profile call stacks in Falcon, you can write a middleware extension using pyinstrument.

Create a middleware class and start the profiler at process_request and stop it at process_response. The middleware can be added to the app.

Make sure you configure a setting to only make this available when required.

from pyinstrument import Profiler
import falcon

class ProfilerMiddleware:
    def __init__(self, interval=0.01):
        self.profiler = Profiler(interval=interval)

    def process_request(self, req, resp):
        self.profiler.start()

    def process_response(self, req, resp, resource, req_succeeded):
        self.profiler.stop()
        self.profiler.open_in_browser()

PROFILING = True  # Set this from a settings model

app = falcon.App()
if PROFILING:
    app.add_middleware(ProfilerMiddleware())


To invoke, make any request to your application and it will launch a new browser window showing the HTML result from pyinstrument.

Profile a web request in Litestar

The following is a minimal application setup that allows request profiling.

The middleware overrides the response to return a profiling report in HTML format.

from __future__ import annotations

from asyncio import sleep

from litestar import Litestar, get
from litestar.middleware import MiddlewareProtocol
from litestar.types import ASGIApp, Message, Receive, Scope, Send

from pyinstrument import Profiler

class ProfilingMiddleware(MiddlewareProtocol):
    def __init__(self, app: ASGIApp) -> None:
        super().__init__(app)  # type: ignore
        self.app = app

    async def __call__(self, scope: Scope, receive: Receive, send: Send) -> None:
        profiler = Profiler(interval=0.001, async_mode="enabled")
        profiler.start()

        profile_html: str | None = None

        async def send_wrapper(message: Message) -> None:
            if message["type"] == "http.response.start":
                profiler.stop()
                nonlocal profile_html
                profile_html = profiler.output_html()
                message["headers"] = [
                    (b"content-type", b"text/html; charset=utf-8"),
                    (b"content-length", str(len(profile_html)).encode()),
                ]
            elif message["type"] == "http.response.body":
                assert profile_html is not None
                message["body"] = profile_html.encode()
            await send(message)

        await self.app(scope, receive, send_wrapper)

@get("/")
async def index() -> str:
    await sleep(1)
    return "Hello, world!"

app = Litestar(
    route_handlers=[index],
    middleware=[ProfilingMiddleware],
)


To invoke, make any request to your application and it will return the HTML result from pyinstrument instead of your application's response.

Profile a web request in aiohttp.web

You can use a simple middleware to profile aiohttp web server requests with Pyinstrument:

from aiohttp import web

from pyinstrument import Profiler

@web.middleware
async def profiler_middleware(request, handler):
    with Profiler() as p:
        await handler(request)
    return web.Response(text=p.output_html(), content_type="text/html")

app = web.Application(middlewares=(profiler_middleware,))


Pyinstrument's HTML output will be returned as the response, showing the profiling result of each request.

Make use of the aiohttp.web development CLI to isolate configurations and make sure profiling is only enabled when needed:

...

def dev_app(argv):
    app = web.Application(middlewares=(profiler_middleware,))
    app.add_routes(routes)
    return app  # for development

if __name__ == '__main__':
    app = web.Application()
    app.add_routes(routes)
    web.run_app(...)  # for deployment


python3 -m aiohttp.web app:dev_app # develop with profiling and debug enabled
python3 ./app.py # run app without profiling


Profile Pytest tests

Pyinstrument can be invoked via the command-line to run pytest, giving you a consolidated report for the test suite.

pyinstrument -m pytest [pytest-args...]


Or, to instrument specific tests, create an auto-used fixture in conftest.py in your test folder:

from pathlib import Path

import pytest
from pyinstrument import Profiler

TESTS_ROOT = Path.cwd()

@pytest.fixture(autouse=True)
def auto_profile(request):
    PROFILE_ROOT = TESTS_ROOT / ".profiles"
    # Turn profiling on
    profiler = Profiler()
    profiler.start()

    yield  # Run test

    profiler.stop()
    PROFILE_ROOT.mkdir(exist_ok=True)
    results_file = PROFILE_ROOT / f"{request.node.name}.html"
    profiler.write_html(results_file)


This will generate an HTML file for each test node in your test suite inside the .profiles directory.

Profile something else?

I'd love to have more ways to profile using Pyinstrument - e.g. other web frameworks. PRs are encouraged!

HOW IT WORKS

Pyinstrument interrupts the program every 1ms[1] and records the entire stack at that point. It does this using a C extension and PyEval_SetProfile, but only takes readings every 1ms. Check out this blog post <http://joerick.me/posts/2017/12/15/pyinstrument-20/> for more info.

You might be surprised at how few samples make up a report, but don't worry, it won't decrease accuracy. The default interval of 1ms is a lower bound for recording a stackframe, but if there is a long time spent in a single function call, it will be recorded at the end of that call. So effectively those samples were 'bunched up' and recorded at the end.

Statistical profiling (not tracing)

Pyinstrument is a statistical profiler - it doesn't track every function call that your program makes. Instead, it records the call stack every 1ms.

That gives some advantages over other profilers. Firstly, statistical profilers are much lower-overhead than tracing profilers.

Django template render × 4000                              Time    Overhead
Base          ████████████████                             0.33s
pyinstrument  ████████████████████                         0.43s      30%
cProfile      █████████████████████████████                0.61s      84%
profile       ██████████████████████████████████...██      6.79s    2057%

Low overhead is also important because a high-overhead profiler can distort the results. When using a tracing profiler, code that makes a lot of Python function calls invokes the profiler a lot, making it slower. This distorts the results, and might lead you to optimise the wrong part of your program!

Full-stack recording

The standard Python profilers profile <http://docs.python.org/2/library/profile.html#module-profile> and cProfile <http://docs.python.org/2/library/profile.html#module-cProfile> show you a big list of functions, ordered by the time spent in each function. This is great, but it can be difficult to interpret: it's more helpful to know why those functions are called, and which parts of user code were involved.

For example, let's say I want to figure out why a web request in Django is slow. If I use cProfile, I might get this:

151940 function calls (147672 primitive calls) in 1.696 seconds

Ordered by: cumulative time
ncalls tottime percall cumtime percall filename:lineno(function)
1 0.000 0.000 1.696 1.696 profile:0(<code object <module> at 0x1053d6a30, file "./manage.py", line 2>)
1 0.001 0.001 1.693 1.693 manage.py:2(<module>)
1 0.000 0.000 1.586 1.586 __init__.py:394(execute_from_command_line)
1 0.000 0.000 1.586 1.586 __init__.py:350(execute)
1 0.000 0.000 1.142 1.142 __init__.py:254(fetch_command)
43 0.013 0.000 1.124 0.026 __init__.py:1(<module>)
388 0.008 0.000 1.062 0.003 re.py:226(_compile)
158 0.005 0.000 1.048 0.007 sre_compile.py:496(compile)
1 0.001 0.001 1.042 1.042 __init__.py:78(get_commands)
153 0.001 0.000 1.036 0.007 re.py:188(compile)
106/102 0.001 0.000 1.030 0.010 __init__.py:52(__getattr__)
1 0.000 0.000 1.029 1.029 __init__.py:31(_setup)
1 0.000 0.000 1.021 1.021 __init__.py:57(_configure_logging)
2 0.002 0.001 1.011 0.505 log.py:1(<module>)


It's often hard to understand how your own code relates to these traces.

Pyinstrument records the entire stack, so tracking expensive calls is much easier. It also hides library frames by default, letting you focus on how your app/module is affecting performance.


  _     ._   __/__   _ _  _  _ _/_   Recorded: 14:53:35  Samples:  131
 /_//_/// /_\ / //_// / //_'/ //     Duration: 3.131     CPU time: 0.195
/   _/                      v3.0.0b3

Program: examples/django_example/manage.py runserver --nothreading --noreload

3.131 <module>  manage.py:2
└─ 3.118 execute_from_command_line  django/core/management/__init__.py:378
      [473 frames hidden]  django, socketserver, selectors, wsgi...
         2.836 select  selectors.py:365
         0.126 _get_response  django/core/handlers/base.py:96
         └─ 0.126 hello_world  django_example/views.py:4


'Wall-clock' time (not CPU time)

Pyinstrument records duration using 'wall-clock' time. When you're writing a program that downloads data, reads files, and talks to databases, all that time is included in the tracked time by pyinstrument.

That's really important when debugging performance problems, since Python is often used as a 'glue' language between other services. The problem might not be in your program, but you should still be able to find why it's slow.

Async profiling

pyinstrument can profile async programs that use async and await. This async support works by tracking the 'context' of execution, as provided by the built-in contextvars <https://docs.python.org/3/library/contextvars.html> module.

When you start a Profiler with the async_mode <#pyinstrument.Profiler.async_mode> enabled or strict (not disabled), that Profiler is attached to the current async context.

When profiling, pyinstrument keeps an eye on the context. When execution exits the context, it captures the await stack that caused the context to exit. Any time spent outside the context is attributed to the await that halted execution.

Async contexts are inherited, so tasks started when a profiler is active are also profiled.

[image: Async context inheritance]

pyinstrument supports async mode with asyncio and Trio; other async/await frameworks should work as long as they use contextvars <https://docs.python.org/3/library/contextvars.html>.
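
A sketch of profiling a small asyncio program (the sleep stands in for real awaited work):

import asyncio

from pyinstrument import Profiler

async def main():
    with Profiler(async_mode="enabled") as profiler:
        await asyncio.sleep(1)  # awaited time is attributed to this frame
    profiler.print()

asyncio.run(main())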

Greenlet <https://pypi.org/project/greenlet/> doesn't use async and await, and alters the Python stack during execution, so is not fully supported. However, because greenlet also supports contextvars <https://docs.python.org/3/library/contextvars.html>, we can limit profiling to one green thread, using strict mode. In strict mode, whenever your green thread is halted the time will be tracked in an <out-of-context> frame. Alternatively, if you want to see what's happening when your green thread is halted, you can use async_mode='disabled' - just be aware that readouts might be misleading if multiple tasks are running concurrently.


----



[1]
Or, your configured interval.

API REFERENCE

Command line interface

pyinstrument works just like python, on the command line, so you can call your scripts like pyinstrument script.py or pyinstrument -m my_module.

When your script ends, or when you kill it with ctrl-c, pyinstrument will print a profile report to the console.

Usage: pyinstrument [options] scriptfile [arg] ...
Options:

--version show program's version number and exit
-h, --help show this help message and exit
--load=FILENAME instead of running a script, load a profile session
from a pyisession file
--load-prev=IDENTIFIER
instead of running a script, load a previous profile
session as specified by an identifier
-m MODULE run library module as a script, like 'python -m
module'
-c PROGRAM program passed in as string, like 'python -c "..."'
--from-path (POSIX only) instead of the working directory, look
for scriptfile in the PATH environment variable
-o OUTFILE, --outfile=OUTFILE
save to <outfile>
-r RENDERER, --renderer=RENDERER
how the report should be rendered. One of: 'text',
'html', 'json', 'speedscope', 'pyisession', 'pstats',
or python import path to a renderer class. Defaults to
the appropriate format for the extension if OUTFILE is
given, otherwise, defaults to 'text'.
-p RENDER_OPTION, --render-option=RENDER_OPTION
options to pass to the renderer, in the format
'flag_name' or 'option_name=option_value'. For
example, to set the option 'time', pass '-p
time=percent_of_total'. To pass multiple options, use
the -p option multiple times. You can set processor
options using dot-syntax, like '-p
processor_options.filter_threshold=0'. option_value is
parsed as a JSON value or a string.
-t, --timeline render as a timeline - preserve ordering and don't
condense repeated calls
--hide=EXPR glob-style pattern matching the file paths whose
frames to hide. Defaults to hiding non-application
code
--hide-regex=REGEX regex matching the file paths whose frames to hide.
Useful if --hide doesn't give enough control.
--show=EXPR glob-style pattern matching the file paths whose
frames to show, regardless of --hide or --hide-regex.
For example, use --show '*/<library>/*' to show frames
within a library that would otherwise be hidden.
--show-regex=REGEX regex matching the file paths whose frames to always
show. Useful if --show doesn't give enough control.
--show-all show everything
--unicode (text renderer only) force unicode text output
--no-unicode (text renderer only) force ascii text output
--color (text renderer only) force ansi color text output
--no-color (text renderer only) force no color text output
-i INTERVAL, --interval=INTERVAL
Minimum time, in seconds, between each stack sample.
Smaller values allow resolving shorter duration
function calls but incur a greater runtime and memory
consumption overhead. For longer running scripts,
setting a larger interval reduces the memory
consumption required to store the stack samples.
--use-timing-thread Use a separate thread to time the interval between
stack samples. This can reduce the overhead of
sampling on some systems.


Python API

The Python API is also available, for calling pyinstrument directly from Python and writing integrations with other tools.

The profile function

For example:

with pyinstrument.profile():
    time.sleep(1)


This will print something like:

pyinstrument ........................................
.
.  Block at testfile.py:2
.
.  1.000 <module>  testfile.py:1
.  └─ 1.000 sleep  <built-in>
.
.....................................................


You can also use it as a function/method decorator, like this:

@pyinstrument.profile()
def my_function():
    time.sleep(1)


Creates a context-manager or function decorator object, which profiles the given code and prints the output to stdout.

The interval, async_mode and use_timing_thread parameters are passed through to the underlying pyinstrument.Profiler object.

You can pass a renderer to customise the output. By default, it uses a ConsoleRenderer with short_mode set.


The Profiler object

The profiler - this is the main way to use pyinstrument.

Note that profiling will not start until start() is called.

  • interval (float) -- See interval.
  • async_mode (AsyncMode) -- See async_mode.
  • use_timing_thread (bool | None) -- If True, the profiler will use a separate thread to keep track of time. This is useful if you're on a system where getting the time has significant overhead.


interval
The minimum time, in seconds, between each stack sample. This translates into the resolution of the sampling.

async_mode
Configures how this Profiler tracks time in a program that uses async/await.

'enabled': when this profiler sees an await, time is logged in the function that awaited, rather than observing other coroutines or the event loop.

'disabled': this profiler doesn't attempt to track await. In a program that uses async/await, this will interleave other coroutines and event loop machinery in the profile. Use this option if async support is causing issues in your use case, or if you want to run multiple profilers at once.

'strict': instructs the profiler to only profile the current async context <https://docs.python.org/3/library/contextvars.html>. Frames that are observed in another context are ignored, tracked instead as <out-of-context>.


last_session
The previous session recorded by the Profiler.

start(caller_frame=None)
Instructs the profiler to start - to begin observing the program's execution and recording frames.

The normal way to invoke start() is with a new instance, but you can restart a Profiler that was previously running, too. The sessions are combined.

caller_frame (FrameType | None) --

Set this to override the default behaviour of treating the caller of start() as the 'start_call_stack' - the instigator of the profile. Most renderers will trim the 'root' from the call stack up to this frame, to present a simpler output.

You might want to set this to inspect.currentframe().f_back if you are writing a library that wraps pyinstrument.
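
A sketch of restarting the same Profiler; the two recorded sessions are combined:

import time

from pyinstrument import Profiler

profiler = Profiler()

profiler.start()
time.sleep(0.1)  # first chunk of work
profiler.stop()

profiler.start()  # restarting combines this with the previous session
time.sleep(0.1)   # second chunk of work
profiler.stop()

profiler.print()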



stop()
Stops the profiler observing, and sets last_session to the captured session.

Returns: the captured session.
Return type: Session <#pyinstrument.session.Session>


is_running
Returns True if this profiler is running - i.e. observing the program execution.

reset()
Resets the Profiler, clearing the last_session.

__enter__()
Context manager support.

Profilers can be used in with blocks! See this example:

with Profiler() as p:
    # your code here...
    do_some_work()

# profiling has ended. let's print the output.
p.print()



print()
Print the captured profile to the console, as rendered by renderers.ConsoleRenderer.

file (IO[str]) -- the IO stream to write to. Could be a file descriptor or sys.stdout, sys.stderr. Defaults to sys.stdout.

See renderers.ConsoleRenderer for the other parameters.


output_text()
Return the profile output as text, as rendered by renderers.ConsoleRenderer.

See renderers.ConsoleRenderer for parameter descriptions.
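
For example, a sketch assuming profiler is a stopped Profiler from the examples above:

text = profiler.output_text(unicode=True, color=True)
print(text)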



output_html()
Return the profile output as HTML, as rendered by renderers.HTMLRenderer.


write_html(path)
Writes the profile output as HTML to a file, as rendered by renderers.HTMLRenderer.


open_in_browser()
Opens the last profile session in your web browser.


output(renderer)
Returns the last profile session, as rendered by renderer.

renderer (Renderer) -- The renderer to use.
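
A sketch using a non-default renderer, assuming profiler is a stopped Profiler:

from pyinstrument.renderers import SpeedscopeRenderer

speedscope_json = profiler.output(SpeedscopeRenderer())
with open("profile.speedscope.json", "w") as f:
    f.write(speedscope_json)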



Sessions

Session
Represents a profile session; contains the data collected during a profile session.

Session.load(filename)
Load a previously saved session from disk.

filename (str | PathLike[str]) -- The path to load from.
Return type: Session <#pyinstrument.session.Session>


save(filename)
Saves a Session object to disk, in a JSON format.

filename (str | PathLike[str]) -- The path to save to. Using the .pyisession extension is recommended.


Session.combine(session1, session2)
Combines two Session objects.

Sessions that are joined in this way probably shouldn't be interpreted as timelines, because the samples are simply concatenated. But aggregate views (the default) of this data will work.

Return type: Session <#pyinstrument.session.Session>
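
A sketch of saving, reloading and combining sessions (file names are illustrative, and profiler is a stopped Profiler):

from pyinstrument.session import Session

session = profiler.last_session  # from a stopped Profiler
session.save("run1.pyisession")

reloaded = Session.load("run1.pyisession")
combined = Session.combine(session, reloaded)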


root_frame()
Parses the internal frame records and returns a tree of Frame objects. This object can be rendered using a Renderer object.

Returns: a Frame object, or None if the session is empty.


Shorten a path to a more readable form, relative to sys_path. Used by Frame.short_file_path.



Renderers

Renderers transform a tree of Frame objects into some form of output.

Rendering has two steps:

1. First, the renderer will 'preprocess' the Frame tree, applying each processor in the processors property, in turn.
2. The resulting tree is rendered into the desired format.

Therefore, rendering can be customised by changing the processors property. For example, you can disable time-aggregation (making the profile into a timeline) by removing aggregate_repeated_calls().
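
A sketch of that customisation, assuming profiler is a stopped Profiler:

from pyinstrument import processors
from pyinstrument.renderers import ConsoleRenderer

renderer = ConsoleRenderer()
renderer.processors = [
    p for p in renderer.processors
    if p is not processors.aggregate_repeated_calls
]
print(profiler.output(renderer))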

FrameRenderer
An abstract base class for renderers that process Frame objects using processor functions. Provides a common interface to manipulate the processors before rendering.
  • show_all (bool) -- Don't hide or filter frames - show everything that pyinstrument captures.
  • timeline (bool) -- Instead of aggregating time, leave the samples in chronological order.
  • processor_options (dict[str, Any]) -- A dictionary of processor options.


Processors installed on this renderer. This property is defined on the base class to provide a common way for users to add and manipulate them before calling render().

Dictionary containing processor options, passed to each processor.

Return a list of processors that this renderer uses by default.


Return a string that contains the rendered form of frame.



ConsoleRenderer
Produces text-based output, suitable for text files or ANSI-compatible consoles.
  • unicode (bool) -- Use unicode, like box-drawing characters in the output.
  • color (bool) -- Enable color support, using ANSI color sequences.
  • flat (bool) -- Display a flat profile instead of a call graph.
  • time (LiteralStr['seconds', 'percent_of_total']) -- How to display the duration of each frame - 'seconds' or 'percent_of_total'
  • flat_time (FlatTimeMode) -- Show 'self' time or 'total' time (including children) in flat profile.
  • short_mode (bool) -- Display a short version of the output.
  • show_all (bool) -- See FrameRenderer.
  • timeline (bool) -- See FrameRenderer.
  • processor_options (dict[str, Any]) -- See FrameRenderer.



HTMLRenderer
Renders a rich, interactive web page, as a string of HTML.


JSONRenderer
Outputs a tree of JSON, containing processed frames.
  • show_all -- Don't hide or filter frames - show everything that pyinstrument captures.
  • timeline -- Instead of aggregating time, leave the samples in chronological order.
  • processor_options -- A dictionary of processor options.



SpeedscopeRenderer
Outputs a tree of JSON conforming to the speedscope schema documented at:

wiki: <https://github.com/jlfwong/speedscope/wiki/Importing-from-custom-sources>
schema: <https://www.speedscope.app/file-format-schema.json>
spec: <https://github.com/jlfwong/speedscope/blob/main/src/lib/file-format-spec.ts>
example: <https://github.com/jlfwong/speedscope/blob/main/sample/profiles/speedscope/0.0.1/simple.speedscope.json>

  • show_all -- Don't hide or filter frames - show everything that pyinstrument captures.
  • timeline -- Instead of aggregating time, leave the samples in chronological order.
  • processor_options -- A dictionary of processor options.



Processors

Processors are functions that take a Frame object, and mutate the tree to perform some task.

They can mutate the tree in-place, but since they can also change the root frame, they should always be called like:

frame = processor(frame, options=...)


Removes <frozen importlib._bootstrap> frames that clutter the output.


Removes frames that have set a local __tracebackhide__ (e.g. __tracebackhide__ = True), to hide them from the output.


Converts a timeline into a time-aggregate summary.

Adds together calls along the same call stack, so that repeated calls appear as the same frame. Removes time-linearity - frames are sorted according to total time spent.

Useful for outputs that display a summary of execution (e.g. text and html outputs)



Groups frames that should be hidden into FrameGroup objects, according to hide_regex and show_regex in the options dict, as applied to the file path of the source code of the frame. If both match, 'show' has precedence. Options:

hide_regex: regular expression which, if it matches the file path, hides the frame in a frame group.
show_regex: regular expression which, if it matches the file path, ensures the frame is not hidden.

Single frames are not grouped; there must be at least two frames in a group.




When a frame has only one child, and that is a self-time frame, remove that node and move the time to parent, since it's unnecessary - it clutters the output and offers no additional information.


Remove nodes that represent less than e.g. 1% of the output. Options:

filter_threshold: sets the minimum duration of a frame to be included in the output. Default: 0.01.
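
For example, a sketch of disabling the filtering entirely, mirroring the CLI's -p processor_options.filter_threshold=0:

from pyinstrument.renderers import ConsoleRenderer

renderer = ConsoleRenderer(processor_options={"filter_threshold": 0})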



The first few frames when using the command line are the __main__ of pyinstrument, the eval, and the 'runpy' module. This processor removes them from the output.


Internals notes

Frames are recorded by the Profiler in a time-linear fashion. While profiling, the profiler builds a list of frame stacks, with each frame in the format:

function_name <null> filename <null> function_line_number


When profiling is complete, this list is turned into a tree structure of Frame objects. This tree contains all the information as gathered by the profiler, suitable for a flame render.

Frame objects, the call tree, and processors

The frames are assembled into a call tree by the profiler session. The time-linearity is retained at this stage.

Before rendering, the call tree is then fed through a sequence of 'processors' to transform the tree for output.

The most interesting is aggregate_repeated_calls, which combines different instances of function calls into the same frame. This is intuitive as a summary of where time was spent during execution.

The rest of the processors focus on removing or hiding irrelevant Frames from the output.

Self time frames vs. frame.self_time

Self time nodes exist to record time spent in a node, but not in its children. But normal frame objects can have self_time too. Why? frame.self_time is used to store the self_time of any nodes that were removed during processing.


Author

Joe Rickerby

Copyright

2021, Joe Rickerby

November 9, 2025