Top Related Projects
🔬 A fast, interactive web-based viewer for performance profiles.
Stack trace visualizer
FlameScope is a visualization tool for exploring different time ranges as Flame Graphs.
🔥 Pyflame: A Ptracing Profiler For Python. This project is deprecated and not maintained.
pprof is a tool for visualization and analysis of profiling data
Quick Overview
The what-studio/profiling repository is a Python library that provides a simple and lightweight profiling tool for Python applications. It allows developers to measure the performance of their code and identify bottlenecks, making it easier to optimize and improve the efficiency of their applications.
Pros
- Lightweight and Easy to Use: The library is designed to be easy to integrate into existing projects, with minimal overhead and configuration required.
- Detailed Profiling Information: The library provides detailed information about the execution time and resource usage of individual functions and methods, making it easier to identify performance issues.
- Flexible Viewing: Profiling results can be explored in an interactive viewer, dumped to a file for later inspection, or streamed to remote clients over a socket.
- Multiple Profiling Modes: The library offers both deterministic (tracing) and statistical (sampling) profiling, with thread- and greenlet-aware CPU timers.
Cons
- Limited Platform Support: The library supports only Python 2.7 and 3.3-3.5, runs only on Linux, and is no longer maintained (the authors recommend py-spy instead).
- Potential Performance Impact: The default deterministic (tracing) profiler adds significant overhead; the sampling profiler is the lighter option for long-running applications.
- No Graphical Visualization: The built-in viewer is terminal-based; there are no flame graphs or web-based charts.
- Limited Community Support: The project has a relatively small community, which limits the availability of documentation, tutorials, and support resources.
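The performance-impact point is easy to check empirically. The sketch below uses only the standard library (not the profiling package): it times the same workload with and without a sys.setprofile hook, the kind of per-call callback a deterministic profiler relies on. The workload and numbers are purely illustrative.

```python
import sys
import time

events = 0

def tracer(frame, event, arg):
    # A deterministic profiler pays for a callback like this on every
    # call/return event; that is where the overhead comes from.
    global events
    events += 1

def inner(i):
    return i * i

def workload():
    total = 0
    for i in range(20000):
        total += inner(i)
    return total

# Baseline run, no profiling hook installed.
t0 = time.perf_counter()
plain = workload()
plain_s = time.perf_counter() - t0

# Same run with a do-nothing profile hook installed.
t0 = time.perf_counter()
sys.setprofile(tracer)
traced = workload()
sys.setprofile(None)
traced_s = time.perf_counter() - t0

assert plain == traced  # the hook must not change program results
print(f"hook fired {events} times; slowdown ~{traced_s / plain_s:.1f}x")
```

Even a hook that does nothing fires tens of thousands of times here, which is why the README below recommends the sampling profiler for hot or long-running code.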
Code Examples
Here are a few examples of how to use the what-studio/profiling library:
- Basic Profiling:
from profiling.tracing import TracingProfiler
profiler = TracingProfiler()
profiler.start()
my_function(1, 2)
profiler.stop()
profiler.run_viewer()
- Profiling with a Context Manager:
from profiling.tracing import TracingProfiler
profiler = TracingProfiler()
with profiler:
    my_function(1, 2)
profiler.run_viewer()
- Statistical (Sampling) Profiling:
from profiling.sampling import SamplingProfiler
profiler = SamplingProfiler()
with profiler:
    my_function(1, 2)
profiler.run_viewer()
- Saving Profiling Data to a File:
from profiling.tracing import TracingProfiler
profiler = TracingProfiler()
with profiler:
    my_function(1, 2)
profiler.dump('profile_data.prf')  # browse later with: profiling view profile_data.prf
Getting Started
To get started with the what-studio/profiling library, follow these steps:
- Install the library using pip:
pip install profiling
- Import the profiler and start profiling your code:
from profiling.tracing import TracingProfiler
profiler = TracingProfiler()
with profiler:
    my_function(1, 2)
profiler.run_viewer()
- Customize the profiling as needed, for example by switching to the sampling profiler or saving the data to a file:
from profiling.sampling import SamplingProfiler
profiler = SamplingProfiler()
with profiler:
    my_function(1, 2)
profiler.dump('profile_data.prf')
- Review the results in the interactive viewer (or load a saved file with the view subcommand) to identify performance bottlenecks and optimize your code accordingly.
For more detailed information and advanced usage, please refer to the project's README and documentation.
Competitor Comparisons
🔬 A fast, interactive web-based viewer for performance profiles.
Pros of Speedscope
- Speedscope provides a more visually appealing and interactive flame graph visualization compared to what-studio/profiling.
- Speedscope supports a wider range of input formats, including Chrome Trace, Firefox Profiler, and more.
- Speedscope has a more user-friendly interface with features like zooming, panning, and filtering.
Cons of Speedscope
- Speedscope may have a steeper learning curve compared to the more straightforward what-studio/profiling.
- Speedscope is a viewer only: it visualizes profiles produced elsewhere, while what-studio/profiling both collects and displays profiling data.
- Speedscope runs in the browser, whereas what-studio/profiling stays in the terminal, which can fit better into SSH-based or server-side workflows.
Code Comparison
Speedscope:
// Illustrative sketch only -- speedscope is normally used by loading a
// profile file into its web UI, not via a JavaScript API like this.
function renderFlameGraph(data) {
  const flamegraph = new Flamegraph(data, {
    onClick: (node) => {
      console.log(`Clicked on ${node.name}`);
    },
    onHover: (node) => {
      console.log(`Hovered over ${node.name}`);
    },
  });
  flamegraph.render(document.getElementById('flame-graph'));
}
what-studio/profiling:
from functools import wraps
from profiling.tracing import TracingProfiler

def profile_function(func):
    @wraps(func)
    def wrapper(*args, **kwargs):
        profiler = TracingProfiler()
        with profiler:
            result = func(*args, **kwargs)
        profiler.dump('profile.prf')  # view later with: profiling view profile.prf
        return result
    return wrapper
Stack trace visualizer
Pros of FlameGraph
- FlameGraph is a widely-used and well-established tool for visualizing performance data, with a large user community and extensive documentation.
- The tool supports a variety of input formats, including perf, DTrace, and LTTng, making it versatile for different profiling use cases.
- FlameGraph provides a clear and intuitive visualization of performance data, making it easier to identify performance bottlenecks and optimize code.
Cons of FlameGraph
- FlameGraph is purely a visualization tool: it renders data collected by other profilers, whereas what-studio/profiling collects the data itself and keeps the full frame stack.
- The tool may have a steeper learning curve for users who are not familiar with the underlying profiling tools and data formats.
Code Comparison
FlameGraph:
#!/usr/bin/env perl
# Aggregate folded stacks ("funcA;funcB;funcC 42") by summing their counts.
use strict;
use warnings;

@ARGV or die "Usage: $0 input.folded\n";
my %seen;
while (<>) {
    chomp;
    next unless s/ +(\d+)$//;    # strip and capture the trailing sample count
    $seen{$_} += $1;
}
for my $stack (sort { $seen{$b} <=> $seen{$a} } keys %seen) {
    print "$stack $seen{$stack}\n";
}
what-studio/profiling:
from profiling.sampling import SamplingProfiler

# Sample the workload statistically, then save the result for later viewing.
profiler = SamplingProfiler()
profiler.start()
run_workload()  # the code you want to profile
profiler.stop()
profiler.dump('profile.prf')  # browse with: profiling view profile.prf
FlameScope is a visualization tool for exploring different time ranges as Flame Graphs.
Pros of Flamescope
- Flamescope provides a web-based interface for visualizing and analyzing CPU profiles, making it easier to share and collaborate on performance analysis.
- The tool consumes the output of system profilers such as Linux perf, so it can be used with programs written in many languages and environments.
- Flamescope's visualization features, such as the flame graph, provide a clear and intuitive way to identify performance bottlenecks in complex systems.
Cons of Flamescope
- Flamescope is a standalone tool, which means it requires additional setup and configuration to integrate with existing development workflows.
- The tool's web-based interface may not be as familiar or accessible to developers who prefer command-line tools or desktop applications.
- Flamescope's focus on CPU profiling may not be as useful for developers working on applications with different performance characteristics, such as I/O-bound or memory-intensive workloads.
Code Comparison
Profiling:
from profiling.tracing import TracingProfiler

profiler = TracingProfiler()
with profiler:
    pass  # your code here
profiler.run_viewer()
Flamescope:
# FlameScope has no in-process Python API; it visualizes profiles
# captured by an external profiler such as Linux perf:
$ perf record -F 49 -a -g -- sleep 30
$ perf script --header > stacks.myapp
The snippets above show the difference in workflow: profiling is driven from Python code, while FlameScope consumes the output of an external profiler and is explored through its web interface.
🔥 Pyflame: A Ptracing Profiler For Python. This project is deprecated and not maintained.
Pros of Pyflame
- Pyflame is a Python profiler that can trace a running Python process without any code instrumentation, making it a non-invasive profiling tool.
- Pyflame supports both Python 2 and Python 3 interpreters, making it usable across different Python environments (though development stopped before recent releases).
- Pyflame provides detailed information about the running Python process, including function call stacks and CPU usage.
Cons of Pyflame
- Pyflame requires the ptrace system call, which may not be available on all platforms or may require additional permissions.
- Pyflame may have performance overhead when profiling long-running or high-concurrency Python applications.
- Pyflame's output can be complex and may require some expertise to interpret, especially for large or complex Python applications.
Code Comparison
Pyflame:
# Pyflame is a command-line tool; attach to a running process and
# sample it for 10 seconds, writing flame-graph input:
$ pyflame -p 12345 -s 10 -o profile.txt
$ flamegraph.pl profile.txt > profile.svg
Profiling:
from profiling.tracing import TracingProfiler

profiler = TracingProfiler()
profiler.start()
# run your code here
profiler.stop()
profiler.dump('profile.prf')
The key difference is that Pyflame uses the ptrace system call to trace a running Python process from the outside, while profiling uses in-process code instrumentation to collect its data. Pyflame's approach is non-invasive, but has platform-specific limitations.
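The instrumentation approach can be sketched in a few lines of plain Python. This is a toy illustration of the mechanism, not TracingProfiler's actual implementation: the interpreter hands the profiler a callback for every call event, so it sees exact call counts without attaching to the process externally.

```python
import sys
from collections import Counter

calls = Counter()

def hook(frame, event, arg):
    # A deterministic profiler receives a callback for every call event,
    # so it records exact call counts -- no external process attach needed.
    if event == "call":
        calls[frame.f_code.co_name] += 1

def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)

sys.setprofile(hook)
fib(10)
sys.setprofile(None)

print(calls["fib"])  # -> 177, the exact number of fib() invocations
```

The exactness is the selling point of instrumentation; the price, as noted above, is that the callback runs on every single call.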
pprof is a tool for visualization and analysis of profiling data
Pros of google/pprof
- Supports a wide range of programming languages, including Go, C, C++, and Rust.
- Provides a comprehensive set of tools for profiling and analyzing performance data.
- Integrates well with the Go standard library's runtime/pprof package.
Cons of google/pprof
- Requires manual instrumentation of the code to collect profiling data.
- May have a steeper learning curve for developers unfamiliar with profiling tools.
- The graph-based web visualizations require extra setup (e.g. Graphviz), and the workflow is geared toward Go rather than Python.
Code Comparison
what-studio/profiling
from profiling.tracing import TracingProfiler

profiler = TracingProfiler()
with profiler:
    my_function(arg1, arg2)
profiler.run_viewer()
google/pprof
package main

import (
    "os"
    "runtime/pprof"
)

func main() {
    // Start CPU profiling, writing the profile to a file.
    f, _ := os.Create("cpu.prof")
    defer f.Close()
    pprof.StartCPUProfile(f)
    defer pprof.StopCPUProfile()

    myFunction(arg1, arg2) // call the code to be profiled
}
README
⚠️ This project is not maintained anymore. We highly recommend switching to py-spy, which provides better performance and usability.
Profiling
The profiling package is an interactive continuous Python profiler. It is inspired by the Unity 3D profiler. This package provides these features:
- Profiling statistics keep the frame stack.
- An interactive TUI profiling statistics viewer.
- Provides both statistical and deterministic profiling.
- Utilities for remote profiling.
- Thread or greenlet aware CPU timer.
- Supports Python 2.7, 3.3, 3.4 and 3.5.
- Currently supports only Linux.
Installation
Install the latest release via PyPI:
$ pip install profiling
Profiling
To profile a single program, simply run the profiling command:
$ profiling your-program.py
Then an interactive viewer will be launched.
If your program uses greenlets, choose the greenlet timer:
$ profiling --timer=greenlet your-program.py
With the --dump option, it saves the profiling result to a file. You can browse the saved result using the view subcommand:
$ profiling --dump=your-program.prf your-program.py
$ profiling view your-program.prf
If your script reads sys.argv, append your arguments after --. This isolates your arguments from the profiling command:
$ profiling your-program.py -- --your-flag --your-param=42
Live Profiling
If your program has a long lifetime, like a web server, a profiling result at the end of the program is not helpful enough. You probably need a continuous profiler. This can be achieved with the live-profile subcommand:
$ profiling live-profile webserver.py
There's also a live-profiling server. The server doesn't profile the program at ordinary times, but when a client connects, it starts profiling and reports the results to all connected clients. Start a profiling server with the remote-profile subcommand:
$ profiling remote-profile webserver.py --bind 127.0.0.1:8912
Then run a client for the server using the view subcommand:
$ profiling view 127.0.0.1:8912
Statistical Profiling
TracingProfiler, the default profiler, is a deterministic profiler that records the deep call graph. Of course, it has heavy overhead, which can pollute your profiling results or slow your application down. In contrast, SamplingProfiler is a statistical profiler. Like other statistical profilers, it has very low overhead. Choose it with the --sampling (or -S for short) option:
$ profiling live-profile -S webserver.py
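The sampling idea can be illustrated with plain Python: a background thread periodically snapshots the target thread's current frame via sys._current_frames (a CPython implementation detail used here only for illustration; SamplingProfiler has its own timers). Functions where the program spends most of its time accumulate the most samples, while the program itself runs unhooked.

```python
import sys
import threading
import time
from collections import Counter

samples = Counter()
stop = threading.Event()

def sampler(target_id, interval=0.005):
    # Instead of hooking every call, periodically snapshot the target
    # thread's current frame -- cheap, and the overhead stays flat.
    while not stop.is_set():
        frame = sys._current_frames().get(target_id)
        if frame is not None:
            samples[frame.f_code.co_name] += 1
        time.sleep(interval)

def busy():
    # A hot loop: it should receive the vast majority of the samples.
    end = time.time() + 0.3
    while time.time() < end:
        pass

t = threading.Thread(target=sampler, args=(threading.get_ident(),))
t.start()
busy()
stop.set()
t.join()

print(samples.most_common(3))  # busy() should dominate the tally
```

The trade-off is visible in the result: sample counts are estimates, not exact call counts, which is exactly the CALLS-column difference between the two profilers described below.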
Timeit then Profiling
Do you use timeit to check the performance of your code?
$ python -m timeit -s 'from trueskill import *' 'rate_1vs1(Rating(), Rating())'
1000 loops, best of 3: 722 usec per loop
If you want to profile the timed code, simply use the timeit subcommand:
$ profiling timeit -s 'from trueskill import *' 'rate_1vs1(Rating(), Rating())'
Profiling from Code
You can also profile your program with profiling.tracing.TracingProfiler or profiling.sampling.SamplingProfiler directly:
from profiling.tracing import TracingProfiler

# profile your program.
profiler = TracingProfiler()
profiler.start()
...  # run your program.
profiler.stop()

# or using a context manager.
with profiler:
    ...  # run your program.

# view and interact with the result.
profiler.run_viewer()

# or save the profile data to a file.
profiler.dump('path/to/file')
Viewer Key Bindings
- q - Quit.
- space - Pause/Resume.
- \ - Toggle layout between NESTED and FLAT.
- ↑ and ↓ - Navigate frames.
- → - Expand the frame.
- ← - Fold the frame.
- > - Go to the hotspot.
- esc - Defocus.
- [ and ] - Change sorting column.
Columns
Common
FUNCTION
- The function name with the code location. (e.g. my_func (my_code.py:42), my_func (my_module:42))
- Only the location, without the line number. (e.g. my_code.py, my_module)
Tracing Profiler
- CALLS - Total call count of the function.
- OWN (Exclusive Time) - Total time spent in the function, excluding sub-calls.
- /CALL after OWN - Exclusive time per call.
- % after OWN - Exclusive time as a share of the total time.
- DEEP (Inclusive Time) - Total time spent in the function, including sub-calls.
- /CALL after DEEP - Inclusive time per call.
- % after DEEP - Inclusive time as a share of the total time.
Sampling Profiler
- OWN (Exclusive Samples) - Number of samples collected during the direct execution of the function.
- % after OWN - Exclusive samples as a share of the total samples.
- DEEP (Inclusive Samples) - Number of samples collected during the execution of the function, including sub-calls.
- % after DEEP - Inclusive samples as a share of the total samples.
Testing
There are some additional requirements to run the test code, which can be installed by running the following command.
$ pip install $(python test/fit_requirements.py test/requirements.txt)
Then you should be able to run pytest.
$ pytest -v
Thanks to
- Seungmyeong Yang who suggested this project.
- Pavel, who inspired the implementation of the -m option.
Licensing
Written by Heungsub Lee at What! Studio in Nexon, and distributed under the BSD 3-Clause license.