Top Related Projects
Free and Open Source, Distributed, RESTful Search Engine
The Prometheus monitoring system and time series database.
Agent for collecting, processing, aggregating, and writing metrics, logs, and other arbitrary data.
Fluentd: Unified Logging Layer (project under CNCF)
Evolving the Prometheus exposition format into a standard.
Quick Overview
Grafana Loki is a horizontally-scalable, highly-available, multi-tenant log aggregation system inspired by Prometheus. It is designed to be very cost effective and easy to operate, as it does not index the contents of the logs, but rather a set of labels for each log stream.
Pros
- Efficient storage and querying of logs due to its unique indexing approach
- Seamless integration with Grafana for visualization and alerting
- Supports multi-tenancy, making it suitable for large organizations
- Easy to set up and operate compared to other log aggregation systems
Cons
- Limited full-text search capabilities compared to traditional log management systems
- Query language (LogQL) may have a learning curve for new users
- Performance can degrade with very high cardinality label sets
- Relatively young project compared to some established alternatives
Getting Started
To get started with Grafana Loki, you can use Docker Compose for a quick setup:
- Create a `docker-compose.yml` file with the following content:

```yaml
version: "3"
services:
  loki:
    image: grafana/loki:latest
    ports:
      - "3100:3100"
    command: -config.file=/etc/loki/local-config.yaml
  promtail:
    image: grafana/promtail:latest
    volumes:
      - /var/log:/var/log
    command: -config.file=/etc/promtail/config.yml
  grafana:
    image: grafana/grafana:latest
    ports:
      - "3000:3000"
```
- Run the following command to start the services:

```bash
docker-compose up -d
```
- Access Grafana at http://localhost:3000 and add Loki as a data source (URL: http://loki:3100).
- Start exploring your logs using Grafana's Explore view and Loki's LogQL query language.
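If you prefer to poke Loki's HTTP API directly rather than going through Grafana, here is a minimal sketch. It assumes the Compose stack above with Loki on localhost:3100; the `job="varlogs"` selector is only an example, since the labels you actually get depend on your Promtail configuration.

```bash
# Wait until Loki reports it is ready to receive traffic.
curl -s http://localhost:3100/ready

# List the label names Loki has indexed so far.
curl -s http://localhost:3100/loki/api/v1/labels

# Run a LogQL query (the {job="varlogs"} selector is illustrative).
curl -s -G http://localhost:3100/loki/api/v1/query_range \
  --data-urlencode 'query={job="varlogs"} |= "error"' \
  --data-urlencode 'limit=10'
```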
Competitor Comparisons
Free and Open Source, Distributed, RESTful Search Engine
Pros of Elasticsearch
- More mature and feature-rich, with advanced full-text search capabilities
- Highly scalable and distributed architecture for handling large datasets
- Extensive ecosystem with various plugins and integrations
Cons of Elasticsearch
- Higher resource requirements and complexity in setup and maintenance
- Steeper learning curve for configuration and optimization
- More expensive for large-scale deployments compared to Loki
Code Comparison
Elasticsearch query:
```
GET /my-index/_search
{
  "query": {
    "match": {
      "message": "error"
    }
  }
}
```
Loki query:
```logql
{job="myapp"} |= "error"
```
Both Elasticsearch and Loki are powerful tools for log management and analysis, but they have different strengths and use cases. Elasticsearch excels in complex full-text search scenarios and handling diverse data types, while Loki is designed specifically for efficient log storage and querying with a focus on simplicity and cost-effectiveness.
Elasticsearch offers more advanced querying capabilities and a wider range of data processing features, making it suitable for various use cases beyond log management. However, this versatility comes at the cost of increased complexity and resource requirements.
Loki, on the other hand, provides a more streamlined approach to log management, with a simpler setup and lower operational costs. It's particularly well-suited for organizations primarily focused on log aggregation and basic querying needs.
The Prometheus monitoring system and time series database.
Pros of Prometheus
- More mature and widely adopted monitoring system with a larger ecosystem
- Powerful query language (PromQL) for complex data analysis and alerting
- Built-in support for service discovery and auto-configuration
Cons of Prometheus
- Limited scalability for high-cardinality data and long-term storage
- Requires additional components (e.g., Thanos, Cortex) for horizontal scaling
- Less efficient for storing and querying large volumes of log data
Code Comparison
Prometheus configuration (prometheus.yml):
```yaml
scrape_configs:
  - job_name: 'node'
    static_configs:
      - targets: ['localhost:9100']
```
Loki configuration (loki-config.yaml):
```yaml
auth_enabled: false

server:
  http_listen_port: 3100

ingester:
  lifecycler:
    address: 127.0.0.1
    ring:
      kvstore:
        store: inmemory
      replication_factor: 1
```
Summary
Prometheus excels in metrics-based monitoring and alerting, offering a mature ecosystem and powerful query language. Loki, on the other hand, is designed specifically for log aggregation and storage, providing better scalability for high-volume log data. While Prometheus is more established, Loki offers a simpler approach to log management, especially when integrated with other Grafana tools.
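To make the "like Prometheus, but for logs" point concrete, here is a rough sketch of how a PromQL rate query maps onto its LogQL counterpart, run here through LogCLI. The metric name, labels, and exact logcli flags are illustrative and may differ in your setup.

```bash
# PromQL (Prometheus): per-second rate of a counter metric over 5 minutes
#   rate(http_requests_total{job="myapp"}[5m])
# LogQL (Loki): per-second rate of matching log lines over 5 minutes
#   rate({job="myapp"} |= "error" [5m])

# Running the LogQL version with logcli against a local Loki:
export LOKI_ADDR=http://localhost:3100
logcli query --since=1h 'rate({job="myapp"} |= "error" [5m])'
```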
Agent for collecting, processing, aggregating, and writing metrics, logs, and other arbitrary data.
Pros of Telegraf
- More versatile: Collects metrics, events, and logs from a wide variety of sources
- Extensive plugin ecosystem: Supports numerous input, output, and processing plugins
- Lightweight and efficient: Written in Go, designed for minimal resource usage
Cons of Telegraf
- Steeper learning curve: Configuration can be complex due to its extensive options
- Less focused on log aggregation: Primarily designed for metrics collection
Code Comparison
Telegraf configuration example:
```toml
[[inputs.cpu]]
  percpu = true
  totalcpu = true
  collect_cpu_time = false
  report_active = false

[[outputs.influxdb]]
  urls = ["http://influxdb:8086"]
```
Loki configuration example:
```yaml
auth_enabled: false

server:
  http_listen_port: 3100

ingester:
  lifecycler:
    address: 127.0.0.1
    ring:
      kvstore:
        store: inmemory
      replication_factor: 1
    final_sleep: 0s
  chunk_idle_period: 5m
  chunk_retain_period: 30s
```
While both repositories serve different primary purposes, they can be complementary in a monitoring stack. Telegraf excels in metrics collection and data forwarding, while Loki focuses on log aggregation and querying. Telegraf's configuration is typically more detailed due to its extensive plugin system, while Loki's configuration is more focused on its log ingestion and storage capabilities.
Fluentd: Unified Logging Layer (project under CNCF)
Pros of Fluentd
- More mature and widely adopted, with a larger ecosystem of plugins
- Supports a broader range of input and output sources out-of-the-box
- Highly flexible and customizable for complex log processing workflows
Cons of Fluentd
- Can be more resource-intensive, especially for high-volume log processing
- Configuration can be complex for advanced use cases
- Less tightly integrated with visualization tools compared to Loki's Grafana integration
Code Comparison
Fluentd configuration example:
```
<source>
  @type tail
  path /var/log/httpd-access.log
  tag apache.access
</source>

<match apache.access>
  @type elasticsearch
  host localhost
  port 9200
  index_name apache-access
</match>
```
Promtail (the Loki agent) scrape configuration example:
```yaml
scrape_configs:
  - job_name: system
    static_configs:
      - targets:
          - localhost
        labels:
          job: varlogs
          __path__: /var/log/*log
```
Both Loki and Fluentd are powerful log management tools, but they serve different purposes. Fluentd excels in log collection and processing, offering extensive customization options. Loki, on the other hand, focuses on efficient log storage and querying, with tight integration into the Grafana ecosystem. The choice between them depends on specific use cases and existing infrastructure.
Evolving the Prometheus exposition format into a standard.
Pros of OpenMetrics
- Focuses on standardizing metrics format, making it easier to integrate with various monitoring systems
- Provides a well-defined specification for metric exposition
- Supports richer metadata and exemplars for more detailed metric information
Cons of OpenMetrics
- Limited to metrics only, not a full observability solution like Loki
- Less active development and community compared to Loki
- Lacks built-in visualization and querying capabilities
Code Comparison
OpenMetrics example:
```text
# HELP http_requests_total Total number of HTTP requests
# TYPE http_requests_total counter
http_requests_total{method="post",code="200"} 1027 1395066363000
http_requests_total{method="post",code="400"} 3 1395066363000
```
Loki example (LogQL query):
```logql
rate({job="varlogs"} |= "error" | json [5m]) > 0.2
```
While OpenMetrics focuses on standardizing metric exposition, Loki provides a more comprehensive log aggregation and querying solution. OpenMetrics is ideal for organizations looking to standardize their metric format across different systems, while Loki offers a full-featured log management platform with powerful querying capabilities.
README
Loki: like Prometheus, but for logs.
Loki is a horizontally-scalable, highly-available, multi-tenant log aggregation system inspired by Prometheus. It is designed to be very cost effective and easy to operate. It does not index the contents of the logs, but rather a set of labels for each log stream.
Compared to other log aggregation systems, Loki:
- does not do full text indexing on logs. By storing compressed, unstructured logs and only indexing metadata, Loki is simpler to operate and cheaper to run.
- indexes and groups log streams using the same labels you're already using with Prometheus, enabling you to seamlessly switch between metrics and logs using the same labels.
- is an especially good fit for storing Kubernetes Pod logs. Metadata such as Pod labels is automatically scraped and indexed.
- has native support in Grafana (needs Grafana v6.0).
A Loki-based logging stack consists of 3 components:
- `promtail` is the agent, responsible for gathering logs and sending them to Loki.
- `loki` is the main server, responsible for storing logs and processing queries.
- Grafana for querying and displaying the logs.
Note that Promtail is considered to be feature complete, and future development for logs collection will be in Grafana Alloy.
Loki is like Prometheus, but for logs: we prefer a multidimensional label-based approach to indexing, and want a single-binary, easy to operate system with no dependencies. Loki differs from Prometheus by focusing on logs instead of metrics, and delivering logs via push, instead of pull.
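As a sketch of what "push" means in practice, a log entry can be sent straight to Loki's push endpoint over HTTP. Agents such as Promtail (or Grafana Alloy) normally do this for you; the stream labels and log line below are made up for illustration.

```bash
# Timestamps are Unix epoch nanoseconds; `date +%s%N` needs GNU date.
curl -s -H "Content-Type: application/json" \
  -X POST http://localhost:3100/loki/api/v1/push \
  --data-raw '{
    "streams": [
      {
        "stream": { "job": "demo", "host": "example-host" },
        "values": [ [ "'"$(date +%s%N)"'", "hello from the push API" ] ]
      }
    ]
  }'
```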
Getting started
Upgrading
Documentation
- Latest release
- Upcoming release, at the tip of the main branch
Commonly used sections:
- API documentation for getting logs into Loki.
- Labels
- Operations
- Promtail is an agent which tails log files and pushes them to Loki.
- Pipelines details the log processing pipeline.
- Docker Driver Client is a Docker plugin to send logs directly to Loki from Docker containers.
- LogCLI provides a command-line interface for querying logs.
- Loki Canary monitors your Loki installation for missing logs.
- Troubleshooting presents help dealing with error messages.
- Loki in Grafana describes how to set up a Loki datasource in Grafana.
Getting Help
If you have any questions or feedback regarding Loki:
- Search existing threads in the Grafana Labs community forum for Loki: https://community.grafana.com
- Ask a question on the Loki Slack channel. To invite yourself to the Grafana Slack, visit https://slack.grafana.com/ and join the #loki channel.
- File an issue for bugs, issues and feature suggestions.
- Send an email to lokiproject@googlegroups.com, or use the web interface.
- UI issues should be filed directly in Grafana.
Your feedback is always welcome.
Further Reading
- The original design doc for Loki is a good source for discussion of the motivation and design decisions.
- Callum Styan's March 2019 DevOpsDays Vancouver talk "Grafana Loki: Log Aggregation for Incident Investigations".
- Grafana Labs blog post "How We Designed Loki to Work Easily Both as Microservices and as Monoliths".
- Tom Wilkie's early-2019 CNCF Paris/FOSDEM talk "Grafana Loki: like Prometheus, but for logs" (slides, video).
- David Kaltschmidt's KubeCon 2018 talk "On the OSS Path to Full Observability with Grafana" (slides, video) on how Loki fits into a cloud-native environment.
- Goutham Veeramachaneni's blog post "Loki: Prometheus-inspired, open source logging for cloud natives" on details of the Loki architecture.
- David Kaltschmidt's blog post "Closer look at Grafana's user interface for Loki" on the ideas that went into the logging user interface.
Contributing
Refer to CONTRIBUTING.md
Building from source
Loki can be run in a single host, no-dependencies mode using the following commands.
You need `go`; we recommend using the version found in our build Dockerfile.
```bash
$ go get github.com/grafana/loki
$ cd $GOPATH/src/github.com/grafana/loki # GOPATH is $HOME/go by default.
$ go build ./cmd/loki
$ ./loki -config.file=./cmd/loki/loki-local-config.yaml
...
```
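As a quick sanity check (a sketch, assuming the default HTTP port 3100 from the local config), you can confirm the locally built Loki is serving traffic:

```bash
# Should eventually return "ready" once startup has finished.
curl -s http://localhost:3100/ready

# Prometheus metrics exposed by Loki itself.
curl -s http://localhost:3100/metrics | head
```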
To build Promtail on non-Linux platforms, use the following command:
```bash
$ go build ./clients/cmd/promtail
```
On Linux, Promtail requires the systemd headers to be installed if Journal support is enabled. To enable Journal support, pass the `promtail_journal_enabled` go build tag.
With Journal support on Ubuntu, run with the following commands:
```bash
$ sudo apt install -y libsystemd-dev
$ go build --tags=promtail_journal_enabled ./clients/cmd/promtail
```
With Journal support on CentOS, run with the following commands:
```bash
$ sudo yum install -y systemd-devel
$ go build --tags=promtail_journal_enabled ./clients/cmd/promtail
```
Otherwise, to build Promtail without Journal support, run `go build` with CGO disabled:

```bash
$ CGO_ENABLED=0 go build ./clients/cmd/promtail
```
Adopters
Please see ADOPTERS.md for some of the organizations using Loki today. If you would like to add your organization, please open a PR to add it to the list.
License
Grafana Loki is distributed under AGPL-3.0-only. For Apache-2.0 exceptions, see LICENSING.md.