Top Related Projects
- FlameGraph: stack trace visualizer
- async-profiler: sampling CPU and HEAP profiler for Java featuring AsyncGetCallTrace + perf_events
- gperftools: main gperftools repository
- go-profiler-notes: felixge's notes on the various Go profiling methods that are available
- go-torch: stochastic flame graph profiler for Go programs
- Vector: an on-host performance monitoring framework that exposes hand-picked, high-resolution metrics to every engineer's browser
Quick Overview
pprof is a tool for visualization and analysis of profiling data. It is most commonly used to analyze CPU and memory usage of Go programs, but it works with any profiler that produces data in the profile.proto format. pprof helps developers identify performance bottlenecks and optimize their code.
Pros
- Powerful visualization capabilities, including graph, flame graph, and source code views
- Supports multiple profiling types (CPU, memory, goroutine, etc.)
- Integrates well with Go's built-in profiling tools
- Can be used both as a command-line tool and as a web interface
Cons
- Learning curve can be steep for beginners
- Some advanced features may require additional setup or dependencies
- Primarily focused on Go, though it can be used with other languages with some effort
- Large profiles can be resource-intensive to analyze
Code Examples
- Enabling CPU profiling in a Go program:
import (
	"log"
	"os"
	"runtime/pprof"
)

func main() {
	f, err := os.Create("cpu.prof")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()
	// Start sampling; the deferred StopCPUProfile flushes the data when main returns.
	if err := pprof.StartCPUProfile(f); err != nil {
		log.Fatal(err)
	}
	defer pprof.StopCPUProfile()
	// Your program logic here
}
- Enabling memory profiling in a Go program:
import (
	"log"
	"os"
	"runtime"
	"runtime/pprof"
)

func main() {
	// Your program logic here
	f, err := os.Create("mem.prof")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()
	runtime.GC() // get up-to-date statistics before writing the heap profile
	if err := pprof.WriteHeapProfile(f); err != nil {
		log.Fatal(err)
	}
}
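Once mem.prof has been written, it can be inspected with the pprof tool; the -inuse_space and -alloc_space options select which heap metric to report. A brief example, assuming the file name from above:
go tool pprof -inuse_space mem.prof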
- Using the HTTP pprof endpoint in a web server:
import (
	"log"
	"net/http"
	_ "net/http/pprof" // registers the /debug/pprof/ handlers on the default mux
)

func main() {
	// Serve the profiling endpoints on a separate port.
	go func() {
		log.Println(http.ListenAndServe("localhost:6060", nil))
	}()
	// Your web server logic here
}
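With the net/http/pprof endpoints registered, profiles can also be fetched directly over HTTP instead of being written to files by hand, for example (assuming the server above is listening on localhost:6060):
go tool pprof http://localhost:6060/debug/pprof/profile?seconds=30
go tool pprof http://localhost:6060/debug/pprof/heap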
Getting Started
To use pprof with a Go program:
- Install Go and set up your Go environment.
- Add profiling code to your program (see examples above).
- Run your program to generate profiling data.
- Analyze the data using the pprof tool:
go tool pprof [binary] [profile_file]
For web interface:
go tool pprof -http=:8080 [binary] [profile_file]
This will open a web browser with the pprof interface, allowing you to explore and analyze your profiling data visually.
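pprof can also compare two profiles of the same type, which is useful for before/after measurements. A sketch, assuming old.prof and new.prof were collected from the same binary:
go tool pprof -http=:8080 -diff_base=old.prof [binary] new.prof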
Competitor Comparisons
FlameGraph: Stack trace visualizer
Pros of FlameGraph
- Provides visually intuitive and interactive flame graphs for performance analysis
- Language-agnostic and can work with various profiling data formats
- Lightweight and easy to integrate into existing workflows
Cons of FlameGraph
- Requires additional steps to generate flame graphs from raw profiling data (see the sketch after the code comparison below)
- Less comprehensive tooling compared to pprof's integrated analysis features
- May require more manual interpretation of results
Code Comparison
FlameGraph (Perl; a simplified sketch of parsing folded stacks, not the actual flamegraph.pl source):
my %Node;
while (<>) {
	chomp;
	# Folded format: "frame1;frame2;frame3 <sample count>".
	my ($stack, $samples) = /^(.*) (\d+)$/ or next;
	my @frames = split ";", $stack;
	flamegraph_add(\%Node, \@frames, $samples);   # hypothetical helper
}
pprof (Go, using the github.com/google/pprof/profile package):
import (
	"os"
	"github.com/google/pprof/profile"
)
func parseProfile(path string) (*profile.Profile, error) {
	f, err := os.Open(path)
	if err != nil {
		return nil, err
	}
	defer f.Close()
	// Parse reads a profile in profile.proto format, optionally gzip-compressed.
	return profile.Parse(f)
}
Both tools serve different purposes in the performance analysis ecosystem. FlameGraph excels at creating visual representations of profiling data, while pprof offers a more comprehensive suite of analysis tools. The choice between them depends on specific project needs and preferences for visualization vs. detailed analysis capabilities.
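As a concrete illustration of the additional steps referenced above, the usual Linux workflow pairs perf with the FlameGraph scripts. A sketch, where ./myprogram is a placeholder binary and the scripts are assumed to be in the current directory:
perf record -F 99 -g -- ./myprogram   # sample stacks at 99 Hz
perf script > out.perf                # dump the samples as text
./stackcollapse-perf.pl out.perf > out.folded
./flamegraph.pl out.folded > flame.svg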
async-profiler: Sampling CPU and HEAP profiler for Java featuring AsyncGetCallTrace + perf_events
Pros of async-profiler
- Designed specifically for Java/JVM profiling, offering better integration and performance for Java applications
- Supports both CPU and allocation profiling with minimal overhead
- Provides flame graph visualization out of the box
Cons of async-profiler
- Limited to Java/JVM environments, while pprof is more language-agnostic
- Less extensive tooling and ecosystem compared to pprof
- May require more setup and configuration for non-standard JVM environments
Code Comparison
async-profiler (via its Java API; a simplified sketch):
AsyncProfiler profiler = AsyncProfiler.getInstance();
// Sample CPU; the second argument is the sampling interval in nanoseconds.
profiler.start(Events.CPU, 10_000_000);
// ... code to profile ...
// Stop profiling and write an HTML flame graph.
profiler.execute("stop,file=profile.html");
pprof (Go, via runtime/pprof):
import (
	"os"
	"runtime/pprof"
)

f, _ := os.Create("cpu.prof") // error handling omitted for brevity
pprof.StartCPUProfile(f)
// ... code to profile ...
pprof.StopCPUProfile()
While both tools serve profiling purposes, async-profiler is tailored for Java environments, offering specific JVM-related features. pprof, on the other hand, is more versatile and can be used across different languages and platforms. The code examples demonstrate the simplicity of usage for both tools, with async-profiler requiring slightly more configuration due to its Java-specific nature.
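In practice async-profiler is usually attached to a running JVM from the command line rather than embedded through the Java API. A sketch, assuming the profiler.sh launcher shipped with the async-profiler distribution and a target process id PID:
./profiler.sh -e cpu -d 30 -f /tmp/flame.html PID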
gperftools: Main gperftools repository
Pros of gperftools
- More comprehensive suite of performance tools, including CPU profiler, heap profiler, and heap checker
- Provides a tcmalloc memory allocator for improved performance
- Supports multiple programming languages beyond Go
Cons of gperftools
- Larger and more complex codebase, potentially harder to integrate
- Less focused on visualization and analysis compared to pprof
- May have higher overhead due to its broader feature set
Code Comparison
pprof (Go, exposing the profiling endpoints over HTTP):
import (
	"net/http"
	"net/http/pprof"
)

func main() {
	// pprof.Index serves the /debug/pprof/ index page.
	http.HandleFunc("/debug/pprof/", pprof.Index)
	http.ListenAndServe(":8080", nil)
}
gperftools (C++, link with -lprofiler):
#include <gperftools/profiler.h>

int main() {
	ProfilerStart("my_profile.prof"); // begin writing CPU samples to this file
	// ... your code here ...
	ProfilerStop();                   // flush and close the profile
	return 0;
}
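gperftools' CPU profiler can also be enabled without code changes by preloading the profiler library and setting the CPUPROFILE environment variable; the resulting file can then be analyzed with pprof. A sketch, where the library path and ./my_binary are placeholders:
LD_PRELOAD=/usr/lib/libprofiler.so CPUPROFILE=/tmp/my.prof ./my_binary
pprof -http=: ./my_binary /tmp/my.prof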
Summary
pprof is a lightweight, Go-focused profiling tool with excellent visualization capabilities, while gperftools offers a more comprehensive suite of performance tools for multiple languages. pprof is easier to integrate into Go projects, while gperftools provides additional features like tcmalloc and heap checking. The choice between them depends on your specific language requirements and profiling needs.
go-profiler-notes: felixge's notes on the various Go profiling methods that are available
Pros of go-profiler-notes
- Comprehensive documentation and explanations about Go profiling
- Covers a wide range of profiling topics, including CPU, memory, and goroutine profiling
- Provides practical examples and best practices for profiling Go applications
Cons of go-profiler-notes
- Not an actual profiling tool, but rather a collection of notes and information
- May require more effort to implement profiling techniques compared to using pprof directly
- Less actively maintained compared to pprof
Code Comparison
pprof (registering the HTTP endpoints):
import (
	"log"
	"net/http"
	_ "net/http/pprof" // registers handlers under /debug/pprof/
)

func main() {
	go func() {
		log.Println(http.ListenAndServe("localhost:6060", nil))
	}()
	// Rest of the application code
}
go-profiler-notes (the kind of example discussed in the notes):
import (
	"os"
	"runtime/pprof"
)

func main() {
	f, _ := os.Create("cpu.pprof") // error handling omitted for brevity
	pprof.StartCPUProfile(f)
	defer pprof.StopCPUProfile()
	// Rest of the application code
}
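Beyond the CPU example above, the notes also cover goroutine profiles, which can be captured with the standard library alone. A minimal sketch, where dumpGoroutines is a hypothetical helper name:
import (
	"os"
	"runtime/pprof"
)

// dumpGoroutines writes a human-readable goroutine dump to stdout;
// debug=1 includes the stack trace of every goroutine.
func dumpGoroutines() {
	pprof.Lookup("goroutine").WriteTo(os.Stdout, 1)
}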
While pprof is a tool for profiling Go programs, go-profiler-notes is a repository containing detailed information about profiling in Go. pprof provides a ready-to-use profiling solution, whereas go-profiler-notes offers in-depth knowledge and guidance on various profiling techniques and their implementation.
go-torch: Stochastic flame graph profiler for Go programs
Pros of go-torch
- Generates visually appealing flame graphs for easy performance analysis
- Integrates well with Uber's internal tooling and workflows
- Lightweight and focused specifically on flame graph generation
Cons of go-torch
- No longer actively maintained (archived repository)
- Limited to flame graph visualization, lacking broader profiling features
- Requires external dependencies (Brendan Gregg's FlameGraph tools)
Code comparison
pprof (parsing a profile programmatically):
import "github.com/google/pprof/profile"

prof, err := profile.Parse(reader)
if err != nil {
	log.Fatal(err)
}
go-torch (a command-line tool rather than a library; a typical invocation that samples a running pprof endpoint and writes an SVG flame graph):
go-torch -u http://localhost:8080/debug/pprof -t 30
Summary
pprof is a comprehensive profiling tool maintained by Google, offering a wide range of profiling capabilities for Go programs. It provides detailed analysis and various visualization options.
go-torch, developed by Uber, focuses specifically on generating flame graphs for performance visualization. While it offers a simpler approach to flame graph generation, it's no longer actively maintained and has more limited functionality compared to pprof.
For most Go profiling needs, pprof is the recommended choice due to its active development, broader feature set, and integration with the Go ecosystem. However, go-torch may still be useful for quick flame graph generation in specific scenarios.
Vector: an on-host performance monitoring framework that exposes hand-picked, high-resolution metrics to every engineer's browser
Pros of Vector
- Provides real-time system performance monitoring and visualization
- Offers a user-friendly web-based interface for easy data exploration
- Supports multiple data sources and custom metrics
Cons of Vector
- More complex setup and configuration compared to pprof
- Requires additional infrastructure to run and maintain
- May have higher resource overhead for continuous monitoring
Code Comparison
Vector (an illustrative sketch only; Vector is deployed as a web UI backed by Performance Co-Pilot rather than consumed as an npm library):
// hypothetical configuration showing the kind of per-host metrics Vector exposes
const vector = require('vector');
vector.start({
	port: 8080,
	metrics: ['cpu', 'memory', 'disk']
});
pprof (Go example):
import (
	"net/http"
	_ "net/http/pprof"
)

func main() {
	go func() {
		http.ListenAndServe("localhost:6060", nil)
	}()
}
Vector focuses on continuous system-wide monitoring with a rich web interface, while pprof is primarily designed for on-demand profiling of Go programs. Vector offers more flexibility in data sources and visualization but requires more setup. pprof is simpler to integrate into Go applications but has limited scope outside of Go profiling. The choice between them depends on the specific monitoring and profiling needs of your project.
README
Introduction
pprof is a tool for visualization and analysis of profiling data.
pprof reads a collection of profiling samples in profile.proto format and generates reports to visualize and help analyze the data. It can generate both text and graphical reports (through the use of the dot visualization package).
profile.proto is a protocol buffer that describes a set of callstacks and symbolization information. A common usage is to represent a set of sampled callstacks from statistical profiling. The format is described on the proto/profile.proto file. For details on protocol buffers, see https://developers.google.com/protocol-buffers
Profiles can be read from a local file, or over http. Multiple profiles of the same type can be aggregated or compared.
If the profile samples contain machine addresses, pprof can symbolize them through the use of the native binutils tools (addr2line and nm).
This is not an official Google product.
Building pprof
Prerequisites:
- Go development kit of a supported version. Follow these instructions to prepare the environment.
- Graphviz: http://www.graphviz.org/ (optional, used to generate graphic visualizations of profiles).
To build and install it:
go install github.com/google/pprof@latest
The binary will be installed in $GOPATH/bin ($HOME/go/bin by default).
Basic usage
pprof can read a profile from a file or directly from a server via http. Specify the profile input(s) in the command line, and use options to indicate how to format the report.
Generate a text report of the profile, sorted by hotness:
% pprof -top [main_binary] profile.pb.gz
Where
- main_binary: local path to the main program binary, to enable symbolization.
- profile.pb.gz: local path to the profile in a compressed protobuf, or URL to the http service that serves a profile.
Generate a graph in an SVG file, and open it with a web browser:
pprof -web [main_binary] profile.pb.gz
Run pprof in interactive mode:
If no output formatting option is specified, pprof runs in interactive mode, where it reads the profile and accepts interactive commands for visualization and refinement of the profile.
pprof [main_binary] profile.pb.gz
This will open a simple shell that takes pprof commands to generate reports.
Type 'help' for available commands/options.
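For example, a few commonly used interactive commands (function_regex is a placeholder):
(pprof) top                 # show the functions consuming the most resources
(pprof) list function_regex # annotate the source of matching functions
(pprof) peek function_regex # show callers and callees of matching functions
(pprof) web                 # render a graph and open it in the browser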
Run pprof via a web interface
If the -http flag is specified, pprof starts a web server at the specified host:port that provides an interactive web-based interface to pprof. Host is optional, and is "localhost" by default. Port is optional, and is a random available port by default. -http=":" starts a server locally at a random port.
pprof -http=[host]:[port] [main_binary] profile.pb.gz
The preceding command should automatically open your web browser at the right page; if not, you can manually visit the specified port in your web browser.
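Because profiles can be read over http, the -http flag can also point directly at a live endpoint such as one exposed via net/http/pprof (host and port are placeholders):
pprof -http=: http://localhost:6060/debug/pprof/heap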
Using pprof with Linux Perf
pprof can read perf.data files generated by the Linux perf tool by using the perf_to_profile program from the perf_data_converter package.
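A typical end-to-end sequence looks roughly like the following (a sketch; the perf_to_profile flag names are assumptions based on the perf_data_converter documentation, and ./my_binary is a placeholder):
perf record -g ./my_binary
perf_to_profile -i perf.data -o profile.pb.gz
pprof -http=: ./my_binary profile.pb.gz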
Viewing disassembly on Windows
To view disassembly of profiles collected from Go programs compiled as Windows executables, the executable must be built with go build -buildmode=exe. LLVM or GCC must be installed, so required tools like addr2line and nm are available to pprof.
Further documentation
See doc/README.md for more detailed end-user documentation.
See CONTRIBUTING.md for contribution documentation.
See proto/README.md for a description of the profile.proto format.