Top Related Projects
- Loki: Like Prometheus, but for logs.
- Elasticsearch: Free and Open Source, Distributed, RESTful Search Engine
- Telegraf: Agent for collecting, processing, aggregating, and writing metrics, logs, and other arbitrary data.
- Netdata: X-Ray Vision for your infrastructure!
- OpenMetrics: Evolving the Prometheus exposition format into a standard.
- VictoriaMetrics: fast, cost-effective monitoring solution and time series database
Quick Overview
Prometheus is an open-source systems monitoring and alerting toolkit originally built at SoundCloud. It is now a standalone project and is maintained independently of any company. Prometheus collects and stores its metrics as time series data, i.e. metrics information is stored with the timestamp at which it was recorded, alongside optional key-value pairs called labels.
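As a small illustration of that data model, each combination of a metric name and a set of label key-value pairs identifies one time series, and every scrape appends a timestamped sample to it. For example (the metric and label names below are illustrative):
api_http_requests_total{method="POST", handler="/messages"}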
Pros
- Powerful and flexible query language (PromQL) for data analysis
- Multi-dimensional data model with time series data identified by metric name and key/value pairs
- No reliance on distributed storage; single server nodes are autonomous
- Supports multiple modes of graphing and dashboarding
Cons
- Scalability can be challenging for very large deployments
- Limited long-term storage options out of the box
- Steeper learning curve compared to some other monitoring solutions
- Not suitable where 100% accuracy is required (e.g., per-request billing), since its scrape-based sampling means collected data may not be complete
Code Examples
- Querying Prometheus using PromQL:
# Average per-CPU user-mode CPU usage for each instance over the last 5 minutes
avg(rate(node_cpu_seconds_total{mode="user"}[5m])) by (instance)
- Configuring a simple scrape job in prometheus.yml:
scrape_configs:
  - job_name: 'node'
    static_configs:
      - targets: ['localhost:9100']
- Defining an alert rule:
groups:
  - name: example
    rules:
      - alert: HighCPUUsage
        expr: 100 - (avg by(instance) (rate(node_cpu_seconds_total{mode="idle"}[5m])) * 100) > 80
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: High CPU usage detected
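For Prometheus to load and evaluate a rule group like the one above, the file containing it must be referenced from prometheus.yml via the rule_files setting (the file name below is illustrative):
rule_files:
  - "alert_rules.yml"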
Getting Started
To get started with Prometheus:
- Download the latest release from the official GitHub repository.
- Extract the files and navigate to the directory.
- Create a basic configuration file named prometheus.yml:
global:
  scrape_interval: 15s
scrape_configs:
  - job_name: 'prometheus'
    static_configs:
      - targets: ['localhost:9090']
- Run Prometheus with the configuration file:
./prometheus --config.file=prometheus.yml
- Access the Prometheus web interface at http://localhost:9090 (a quick sanity check is shown below).
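With the configuration above, Prometheus scrapes itself, so a quick sanity check is to run a simple query in the expression browser; the built-in up metric is 1 for every target Prometheus can currently scrape:
up{job="prometheus"}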
For more detailed instructions and advanced configurations, refer to the official documentation.
Competitor Comparisons
Loki: Like Prometheus, but for logs.
Pros of Loki
- Designed specifically for log aggregation, making it more efficient for log data
- Supports multi-tenancy out of the box
- Integrates seamlessly with other Grafana ecosystem tools
Cons of Loki
- Less mature and has a smaller community compared to Prometheus
- Limited query language (LogQL) compared to PromQL
- May require more storage for high-cardinality log data
Code Comparison
Prometheus (promql):
rate(http_requests_total{job="api-server"}[5m])
Loki (logql):
rate({job="api-server"} |= "http_request" [5m])
Both examples show querying for HTTP request rates, but Loki's query focuses on log entries containing "http_request" while Prometheus uses a specific metric.
Prometheus is a robust metrics-based monitoring system with a powerful query language, while Loki is tailored for log aggregation and storage. Loki's design allows for more efficient log handling, but it may not be as versatile for general metrics. The choice between the two depends on specific monitoring needs and existing infrastructure.
Elasticsearch: Free and Open Source, Distributed, RESTful Search Engine
Pros of Elasticsearch
- More versatile for full-text search and complex queries
- Better suited for handling large volumes of unstructured data
- Provides a rich set of analytics and visualization tools (Kibana)
Cons of Elasticsearch
- Higher resource consumption and complexity
- Steeper learning curve for setup and configuration
- Less efficient for simple time-series data collection and querying
Code Comparison
Elasticsearch query example:
GET /my_index/_search
{
  "query": {
    "match": {
      "title": "elasticsearch"
    }
  }
}
Prometheus query example:
http_requests_total{status!~"4.."}
Summary
Elasticsearch excels in complex search scenarios and handling diverse data types, while Prometheus is optimized for time-series data and metrics collection. Elasticsearch offers powerful analytics but requires more resources and setup effort. Prometheus provides a simpler approach for monitoring and alerting, particularly in containerized environments.
The choice between these tools depends on specific use cases: Elasticsearch for comprehensive search and analytics, Prometheus for straightforward metrics monitoring and alerting. Many organizations use both tools complementarily to leverage their respective strengths.
Telegraf: Agent for collecting, processing, aggregating, and writing metrics, logs, and other arbitrary data.
Pros of Telegraf
- More flexible data collection with support for a wider range of input plugins
- Easier integration with InfluxDB and other time-series databases
- Lightweight and efficient, with low resource consumption
Cons of Telegraf
- Less powerful querying capabilities compared to Prometheus' PromQL
- Lacks built-in alerting functionality
- Requires additional setup for visualization (e.g., Grafana)
Code Comparison
Telegraf configuration (telegraf.conf):
[[inputs.cpu]]
percpu = true
totalcpu = true
collect_cpu_time = false
report_active = false
Prometheus configuration (prometheus.yml):
scrape_configs:
  - job_name: 'node'
    static_configs:
      - targets: ['localhost:9100']
Both configurations are relatively simple, but Telegraf's plugin-based approach allows for more granular control over data collection. Prometheus focuses on a pull-based model with service discovery, while Telegraf supports both push and pull models.
Telegraf excels in data collection flexibility and ease of integration with various systems, making it suitable for diverse monitoring scenarios. Prometheus offers a more comprehensive monitoring solution with built-in querying and alerting capabilities, making it ideal for Kubernetes environments and microservices architectures.
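The two can also be combined: Telegraf's prometheus_client output plugin exposes everything the agent collects in Prometheus format, so a Prometheus server can scrape the Telegraf agent directly (the listen address below is the plugin's usual default; treat it as an illustration):
[[outputs.prometheus_client]]
  listen = ":9273"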
Netdata: X-Ray Vision for your infrastructure!
Pros of Netdata
- Real-time monitoring with per-second granularity
- Easy installation and auto-configuration
- Lightweight and efficient resource usage
Cons of Netdata
- Limited long-term data storage capabilities
- Less extensive ecosystem of exporters and integrations
- Fewer advanced querying and alerting features
Code Comparison
Netdata configuration (netdata.conf):
[global]
update every = 1
memory mode = ram
Prometheus configuration (prometheus.yml):
global:
  scrape_interval: 15s
  evaluation_interval: 15s
scrape_configs:
  - job_name: 'prometheus'
    static_configs:
      - targets: ['localhost:9090']
Key Differences
- Data Collection: Netdata focuses on real-time, high-frequency data collection, while Prometheus uses a pull-based model with longer intervals.
- Storage: Netdata primarily stores data in memory for short-term analysis, whereas Prometheus offers more robust long-term storage options.
- Scalability: Prometheus is designed for large-scale distributed systems, while Netdata excels in single-node or small cluster monitoring.
- Query Language: Prometheus uses PromQL for powerful data querying and analysis, while Netdata relies on simpler filtering and aggregation methods.
- Ecosystem: Prometheus has a larger ecosystem of exporters and integrations, making it more versatile for diverse monitoring needs.
OpenMetrics: Evolving the Prometheus exposition format into a standard.
Pros of OpenMetrics
- Focuses on standardizing metrics exposition format
- Aims for broader adoption across different monitoring systems
- Provides a more detailed specification for metric types and metadata
Cons of OpenMetrics
- Smaller community and fewer contributors compared to Prometheus
- Less mature and still in development
- Limited tooling and ecosystem support
Code Comparison
OpenMetrics example:
# TYPE http_requests_total counter
# UNIT http_requests_total requests
http_requests_total{method="post",code="200"} 1027 1395066363000
Prometheus example:
# HELP http_requests_total The total number of HTTP requests.
# TYPE http_requests_total counter
http_requests_total{method="post",code="200"} 1027
Key Differences
- OpenMetrics focuses on standardization, while Prometheus is a complete monitoring system.
- OpenMetrics provides more detailed metadata and unit information.
- Prometheus has a larger ecosystem and more mature tooling.
- OpenMetrics aims for broader adoption across different monitoring systems.
- Prometheus is more widely used and has a larger community of contributors.
Both projects are valuable in the monitoring space, with OpenMetrics aiming to create a universal standard for metrics exposition, while Prometheus offers a full-featured monitoring and alerting toolkit.
VictoriaMetrics: fast, cost-effective monitoring solution and time series database
Pros of VictoriaMetrics
- Higher performance and better resource efficiency
- Easier scalability and cluster management
- Built-in data compression for reduced storage costs
Cons of VictoriaMetrics
- Less mature ecosystem and community support
- Fewer integrations with third-party tools
- Some features may require commercial licensing
Code Comparison
VictoriaMetrics query example:
sum(rate(http_requests_total{status="200"}[5m])) by (instance)
Prometheus query example:
sum(rate(http_requests_total{status="200"}[5m])) by (instance)
Both systems accept PromQL queries, so the syntax is often identical. However, VictoriaMetrics extends PromQL via its MetricsQL dialect, which adds extra functions and optimizations.
Key Differences
- Architecture: Prometheus uses a pull-based model, while VictoriaMetrics supports both push and pull.
- Storage: VictoriaMetrics uses a custom storage engine optimized for time-series data.
- Scalability: VictoriaMetrics is designed for horizontal scalability out of the box.
- Data retention: VictoriaMetrics offers longer data retention periods with less storage overhead.
- Query performance: VictoriaMetrics generally provides faster query execution, especially for high-cardinality data.
Both systems have their strengths, and the choice between them depends on specific use cases and requirements.
README

Prometheus
Visit prometheus.io for the full documentation, examples and guides.
Prometheus, a Cloud Native Computing Foundation project, is a systems and service monitoring system. It collects metrics from configured targets at given intervals, evaluates rule expressions, displays the results, and can trigger alerts when specified conditions are observed.
The features that distinguish Prometheus from other metrics and monitoring systems are:
- A multi-dimensional data model (time series defined by metric name and set of key/value dimensions)
- PromQL, a powerful and flexible query language to leverage this dimensionality
- No dependency on distributed storage; single server nodes are autonomous
- An HTTP pull model for time series collection (a minimal instrumentation sketch follows this list)
- Pushing time series is supported via an intermediary gateway for batch jobs
- Targets are discovered via service discovery or static configuration
- Multiple modes of graphing and dashboarding support
- Support for hierarchical and horizontal federation
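As a minimal sketch of the pull model from the application side (using the separate client_golang library; the metric name, label, and port below are illustrative), an application exposes a /metrics endpoint and Prometheus scrapes it at the configured interval:
package main

import (
	"fmt"
	"log"
	"net/http"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promauto"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

// requestsTotal is an illustrative counter, partitioned by request path.
var requestsTotal = promauto.NewCounterVec(
	prometheus.CounterOpts{
		Name: "myapp_http_requests_total",
		Help: "Total number of HTTP requests handled by the application.",
	},
	[]string{"path"},
)

func main() {
	// Expose the default registry so Prometheus can scrape it.
	http.Handle("/metrics", promhttp.Handler())

	// Instrument an ordinary handler.
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		requestsTotal.WithLabelValues(r.URL.Path).Inc()
		fmt.Fprintln(w, "ok")
	})

	log.Fatal(http.ListenAndServe(":8080", nil))
}
A scrape job whose static_configs points at localhost:8080 (like the node example earlier) would then collect myapp_http_requests_total.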
Architecture overview
Install
There are various ways of installing Prometheus.
Precompiled binaries
Precompiled binaries for released versions are available in the download section on prometheus.io. Using the latest production release binary is the recommended way of installing Prometheus. See the Installing chapter in the documentation for all the details.
Docker images
Docker images are available on Quay.io or Docker Hub.
You can launch a Prometheus container for trying it out with
docker run --name prometheus -d -p 127.0.0.1:9090:9090 prom/prometheus
Prometheus will now be reachable at http://localhost:9090/.
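To run the container with your own configuration, a common approach is to bind-mount it over the image's default configuration path (assumed here to be /etc/prometheus/prometheus.yml; the host path is a placeholder):
docker run -d --name prometheus -p 127.0.0.1:9090:9090 \
  -v /path/to/prometheus.yml:/etc/prometheus/prometheus.yml \
  prom/prometheus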
Building from source
To build Prometheus from source code, you need:
- Go version 1.22 or greater.
- Node.js version 22 or greater.
- npm version 8 or greater.
Start by cloning the repository:
git clone https://github.com/prometheus/prometheus.git
cd prometheus
You can use the go tool to build and install the prometheus and promtool binaries into your GOPATH:
GO111MODULE=on go install github.com/prometheus/prometheus/cmd/...
prometheus --config.file=your_config.yml
However, when using go install to build Prometheus, Prometheus will expect to be able to read its web assets from local filesystem directories under web/ui/static and web/ui/templates. In order for these assets to be found, you will have to run Prometheus from the root of the cloned repository. Note also that these directories do not include the React UI unless it has been built explicitly using make assets or make build.
An example of the above configuration file can be found at documentation/examples/prometheus.yml in the repository.
You can also build using make build, which will compile in the web assets so that Prometheus can be run from anywhere:
make build
./prometheus --config.file=your_config.yml
The Makefile provides several targets:
- build: build the prometheus and promtool binaries (includes building and compiling in web assets)
- test: run the tests
- test-short: run the short tests
- format: format the source code
- vet: check the source code for common errors
- assets: build the React UI
Service discovery plugins
Prometheus is bundled with many service discovery plugins. When building Prometheus from source, you can edit the plugins.yml file to disable some service discoveries. The file is a YAML-formatted list of Go import paths that will be built into the Prometheus binary.
After you have changed the file, you need to run make build again.
If you are using another method to compile Prometheus, make plugins will generate the plugins file accordingly.
If you add out-of-tree plugins, which we do not endorse at the moment, additional steps might be needed to adjust the go.mod and go.sum files. As always, be extra careful when loading third-party code.
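For illustration, the entries in plugins.yml are plain Go import paths of discovery packages; removing or commenting out a line excludes that service discovery from the build (the two paths below are examples of in-tree discovery packages):
- github.com/prometheus/prometheus/discovery/consul
- github.com/prometheus/prometheus/discovery/kubernetes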
Building the Docker image
You can build a docker image locally with the following commands:
make promu
promu crossbuild -p linux/amd64
make npm_licenses
make common-docker-amd64
The make docker target is intended only for use in our CI system and will not produce a fully working image when run locally.
Using Prometheus as a Go Library
Remote Write
We are publishing our Remote Write protobuf independently at buf.build.
You can use that as a library:
go get buf.build/gen/go/prometheus/prometheus/protocolbuffers/go@latest
This is experimental.
Prometheus code base
In order to comply with go mod rules, Prometheus release numbers do not exactly match Go module releases.
For the Prometheus v3.y.z releases, we are publishing equivalent v0.3y.z tags. The y in v0.3y.z is always padded to two digits, with a leading zero if needed.
Therefore, a user who wants to use Prometheus v3.0.0 as a library can do:
go get github.com/prometheus/prometheus@v0.300.0
For the Prometheus v2.y.z releases, we published the equivalent v0.y.z tags.
Therefore, a user who wants to use Prometheus v2.35.0 as a library can do:
go get github.com/prometheus/prometheus@v0.35.0
This solution makes it clear that we might break our internal Go APIs between minor user-facing releases, as breaking changes are allowed in major version zero.
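As a minimal sketch of what using the code base as a library can look like (here only the PromQL parser package, one of the smaller entry points; the expression is illustrative), after running the go get command above:
package main

import (
	"fmt"
	"log"

	"github.com/prometheus/prometheus/promql/parser"
)

func main() {
	// Parse a PromQL expression and print its value type and canonical form.
	expr, err := parser.ParseExpr(`rate(http_requests_total{job="api-server"}[5m])`)
	if err != nil {
		log.Fatalf("parse error: %v", err)
	}
	fmt.Println(expr.Type(), expr.String())
}
Keep the caveat above in mind: these internal Go APIs may break between minor user-facing releases.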
React UI Development
For more information on building, running, and developing on the React-based UI, see the React app's README.md.
More information
- Godoc documentation is available via pkg.go.dev. Due to peculiarities of Go Modules, v3.y.z will be displayed as v0.3y.z (the y in v0.3y.z is always padded to two digits, with a leading zero if needed), while v2.y.z will be displayed as v0.y.z.
- See the Community page for how to reach the Prometheus developers and users on various communication channels.
Contributing
Refer to CONTRIBUTING.md
License
Apache License 2.0, see LICENSE.