Top Related Projects
- Grafana: The open and composable observability and data visualization platform. Visualize metrics, logs, and traces from multiple sources like Prometheus, Loki, Elasticsearch, InfluxDB, Postgres and many more.
- Prometheus: The Prometheus monitoring system and time series database.
- Beats: :tropical_fish: Lightweight shippers for Elasticsearch & Logstash
- Telegraf: Agent for collecting, processing, aggregating, and writing metrics, logs, and other arbitrary data.
- netdata: Architected for speed. Automated for easy. Monitoring and troubleshooting, transformed!
Quick Overview
The DataDog/datadog-agent repository contains the source code for the Datadog Agent, a lightweight software application that collects metrics, logs, and traces from hosts and sends them to Datadog for monitoring and analysis. It supports various platforms and integrations, allowing users to gain insights into their infrastructure and application performance.
Pros
- Comprehensive monitoring solution covering metrics, logs, and traces
- Extensive integration support for various technologies and platforms
- Highly customizable, with the ability to create custom checks and integrations (see the sketch after this list)
- Active development and regular updates from Datadog and the community
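For example, a custom check is a small Python class that the Agent loads and runs on each collection interval. Below is a minimal sketch, assuming the `datadog_checks_base` package that ships with Agent v6/v7; the check name, metric name, and tag are placeholders.

from datadog_checks.base import AgentCheck

class MyCheck(AgentCheck):
    def check(self, instance):
        # Submit a gauge every time the collector runs this check.
        # "my_check.up" and the "env:dev" tag are illustrative values.
        self.gauge("my_check.up", 1, tags=["env:dev"])

Dropped into the Agent's checks.d directory as my_check.py, with a matching conf.d/my_check.yaml, the check is picked up the next time the Agent restarts.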
Cons
- Can be resource-intensive on systems with many integrations enabled
- Configuration can be complex for advanced use cases
- Some features may require a paid Datadog subscription
- Learning curve for users new to infrastructure monitoring
Getting Started
To install the Datadog Agent on a Linux system:
DD_API_KEY=<YOUR_API_KEY> DD_SITE="datadoghq.com" bash -c "$(curl -L https://s3.amazonaws.com/dd-agent/scripts/install_script.sh)"
For other platforms, refer to the official Datadog documentation for specific installation instructions.
After installation, configure the agent by editing the `datadog.yaml` file:
api_key: <YOUR_API_KEY>
site: datadoghq.com
logs_enabled: true
apm_config:
  enabled: true
Start the agent:
sudo systemctl start datadog-agent
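To confirm the Agent is running and reporting, you can inspect the service and ask the Agent for its own status report (the commands below assume a systemd-based Linux install):

sudo systemctl status datadog-agent
sudo datadog-agent status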
For more detailed configuration and usage instructions, refer to the official Datadog documentation.
Competitor Comparisons
Grafana: The open and composable observability and data visualization platform. Visualize metrics, logs, and traces from multiple sources like Prometheus, Loki, Elasticsearch, InfluxDB, Postgres and many more.
Pros of Grafana
- Open-source and highly customizable visualization platform
- Supports a wide range of data sources and integrations
- Large community and extensive plugin ecosystem
Cons of Grafana
- Requires more setup and configuration compared to Datadog
- Less out-of-the-box monitoring capabilities for specific technologies
- May need additional tools for a complete observability solution
Code Comparison
Grafana (JavaScript):
import { PanelPlugin } from '@grafana/data';
import { SimpleOptions } from './types'; // assumed location of the options type, as in the Grafana plugin scaffold
import { SimplePanel } from './SimplePanel';

export const plugin = new PanelPlugin<SimpleOptions>(SimplePanel).setPanelOptions((builder) => {
  return builder.addTextInput({
    path: 'text',
    name: 'Simple text option',
    description: 'Description of panel option',
    defaultValue: 'Default value of text input option',
  });
});
Datadog Agent (Go):
import (
	"github.com/DataDog/datadog-agent/pkg/collector/check"
	core "github.com/DataDog/datadog-agent/pkg/collector/corechecks"
)

// MyCheck embeds CheckBase to get the default check plumbing.
// The Run method and collection logic are omitted for brevity.
type MyCheck struct {
	core.CheckBase
}

func init() {
	// Register the factory so the collector can instantiate the check by name.
	core.RegisterCheck("my_check", MyCheckFactory)
}

func MyCheckFactory() check.Check {
	return &MyCheck{
		CheckBase: core.NewCheckBase("my_check"),
	}
}
The code snippets showcase the different approaches: Grafana focuses on panel plugin development, while Datadog Agent emphasizes check implementation for data collection.
The Prometheus monitoring system and time series database.
Pros of Prometheus
- Open-source and free to use, with a large community for support and contributions
- Flexible query language (PromQL) for powerful data analysis and alerting
- Built-in service discovery for dynamic environments
Cons of Prometheus
- Requires more setup and configuration compared to Datadog's agent-based approach
- Limited long-term storage options without additional components
- Less out-of-the-box integrations for various services and platforms
Code Comparison
Prometheus configuration (prometheus.yml):
global:
  scrape_interval: 15s

scrape_configs:
  - job_name: 'example'
    static_configs:
      - targets: ['localhost:8080']
Datadog Agent configuration (datadog.yaml):
api_key: <YOUR_API_KEY>
logs_enabled: true
apm_config:
  enabled: true
The Prometheus configuration focuses on defining scrape targets and intervals, while the Datadog Agent configuration emphasizes API key setup and enabling specific features. Prometheus requires more manual configuration for data collection, whereas Datadog Agent provides a more streamlined setup process with its agent-based approach.
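The two approaches can also be combined: the Agent ships a Prometheus/OpenMetrics integration that scrapes the same kind of endpoint a prometheus.yml target points at. A rough sketch of such a check config follows; key names vary slightly across Agent versions, so treat this as illustrative.

init_config:

instances:
  - prometheus_url: http://localhost:8080/metrics
    namespace: example
    metrics:
      - '*'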
:tropical_fish: Beats - Lightweight shippers for Elasticsearch & Logstash
Pros of Beats
- Open-source and highly customizable
- Supports a wide range of data sources and integrations
- Lightweight and efficient resource usage
Cons of Beats
- Steeper learning curve for configuration and deployment
- Less comprehensive out-of-the-box monitoring solutions
- May require additional setup for advanced features
Code Comparison
Beats configuration example:
filebeat.inputs:
  - type: log
    enabled: true
    paths:
      - /var/log/*.log

output.elasticsearch:
  hosts: ["localhost:9200"]
Datadog Agent configuration example:
# conf.d/<integration>.d/conf.yaml
logs:
  - type: file
    path: /var/log/*.log
    service: myapp
    source: python

# datadog.yaml
api_key: your_api_key_here
Both configurations demonstrate log collection, but Beats offers more granular control over input types and output destinations, while Datadog Agent provides a simpler setup with built-in service and source tagging.
Beats is highly modular and customizable, making it suitable for complex environments with specific requirements. Datadog Agent offers a more streamlined experience with integrated features and easier setup for general monitoring needs.
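As one example of that granular control, Filebeat can attach the same kind of service metadata through a processor rather than a built-in field; a minimal sketch using the add_fields processor (the field value is illustrative):

processors:
  - add_fields:
      target: ''
      fields:
        service: myapp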
Telegraf: Agent for collecting, processing, aggregating, and writing metrics, logs, and other arbitrary data.
Pros of Telegraf
- Open-source and free to use, with a large community contributing plugins
- Supports a wide range of input and output plugins, making it highly versatile
- Lightweight and efficient, with low resource consumption
Cons of Telegraf
- Requires more manual configuration and setup compared to Datadog Agent
- Less comprehensive out-of-the-box monitoring features for complex environments
- Limited built-in visualization and alerting capabilities
Code Comparison
Telegraf configuration (TOML):
[[inputs.cpu]]
  percpu = true
  totalcpu = true
  collect_cpu_time = false
  report_active = false
Datadog Agent configuration (YAML):
init_config:

instances:
  - {}
Both agents use configuration files, but Telegraf uses TOML format while Datadog Agent uses YAML. Telegraf's configuration is more detailed and allows for fine-grained control over data collection, while Datadog Agent's configuration is simpler and relies more on built-in defaults.
Telegraf is highly customizable and supports a wide range of data sources and outputs, making it suitable for various monitoring scenarios. Datadog Agent, on the other hand, offers a more integrated and user-friendly experience with its SaaS platform, providing advanced features like APM and log management out of the box.
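The two can also be chained: Telegraf's output plugins include one that writes metrics straight to the Datadog API, which is a common bridge between the stacks. A minimal sketch (the API key is a placeholder):

[[outputs.datadog]]
  ## Datadog API key (placeholder value)
  apikey = "your_api_key_here"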
netdata: Architected for speed. Automated for easy. Monitoring and troubleshooting, transformed!
Pros of netdata
- Open-source and free to use, with no licensing costs
- Highly customizable and extensible through plugins
- Real-time monitoring with per-second granularity
Cons of netdata
- Requires more manual configuration and setup compared to Datadog
- Limited built-in integrations with cloud services and third-party tools
- May require additional resources for long-term data storage and analysis
Code Comparison
netdata configuration example:
[global]
    update every = 1
    memory mode = ram
    history = 3600
    access log = none
    error log = syslog
Datadog Agent configuration example:
api_key: your_api_key_here
logs_enabled: true
apm_config:
  enabled: true
process_config:
  enabled: true
Both projects offer powerful monitoring capabilities, but netdata focuses on real-time, highly granular data collection with a self-hosted approach, while Datadog Agent provides a more comprehensive, cloud-based solution with extensive integrations and out-of-the-box functionality.
README
Datadog Agent
The present repository contains the source code of the Datadog Agent version 7 and version 6. Please refer to the Agent user documentation for information about differences between Agent v5, Agent v6 and Agent v7. Additionally, we provide a list of prepackaged binaries for an easy install process here.
Note: the source code of Datadog Agent v5 is located in the dd-agent repository.
Documentation
The general documentation of the project, including instructions for installation and development, is located under the docs directory of the present repo.
Getting started
To build the Agent you need:
- Go 1.22 or later. You'll also need to set your `$GOPATH` and have `$GOPATH/bin` in your path.
- Python 3.11+ along with development libraries for tooling. You will also need Python 2.7 if you are building the Agent with Python 2 support.
- Python dependencies. You may install these with `pip install -r requirements.txt`. This will also pull in Invoke if not yet installed.
- CMake version 3.12 or later and a C++ compiler.
Note: you may want to use a python virtual environment to avoid polluting your system-wide python environment with the agent build/dev dependencies. You can create a virtual environment using `virtualenv` and then use the `invoke agent.build` parameters `--python-home-2=<venv_path>` and/or `--python-home-3=<venv_path>` (depending on the python versions you are using) to use the virtual environment's interpreter and libraries. By default, this environment is only used for dev dependencies listed in `requirements.txt`.
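As a concrete, illustrative example of that setup (assuming `virtualenv` is installed and you only need Python 3; the `venv3` path is arbitrary):

virtualenv --python=python3 venv3
source venv3/bin/activate
pip install -r requirements.txt
invoke agent.build --python-home-3="$(pwd)/venv3"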
Note: You may have previously installed `invoke` via brew on MacOS, or `pip` on any other platform. We recommend you use the version pinned in the requirements file for a smooth development/build experience.
Note: You can enable auto completion for invoke tasks. Use the command below to add the appropriate line to your `.zshrc` file.
echo "source <(inv --print-completion-script zsh)" >> ~/.zshrc
Builds and tests are orchestrated with `invoke`; type `invoke --list` in a shell to see the available tasks.
To start working on the Agent, you can build the `main` branch:
1. Checkout the repo: `git clone https://github.com/DataDog/datadog-agent.git $GOPATH/src/github.com/DataDog/datadog-agent`.

2. cd into the project folder: `cd $GOPATH/src/github.com/DataDog/datadog-agent`.

3. Install go tools: `invoke install-tools` (if you have a timeout error, you might need to prepend the `GOPROXY=https://proxy.golang.org,https://goproxy.io,direct` env var to the command).

4. Create a development `datadog.yaml` configuration file in `dev/dist/datadog.yaml`, containing a valid API key: `api_key: <API_KEY>`. You can either start with an empty one or use the full one generated by the Agent build from Step 5 (located in `cmd/agent/dist/datadog.yaml` after the build finishes).

5. Build the agent with `invoke agent.build --build-exclude=systemd`.

   By default, the Agent will be built to use Python 3 but you can select which Python version you want to use:

   - `invoke agent.build --python-runtimes 2` for Python2 only
   - `invoke agent.build --python-runtimes 3` for Python3 only
   - `invoke agent.build --python-runtimes 2,3` for both Python2 and Python3

   You can specify a custom Python location for the agent (useful when using virtualenvs):

   invoke agent.build \
       --python-runtimes 2,3 \
       --python-home-2=$GOPATH/src/github.com/DataDog/datadog-agent/venv2 \
       --python-home-3=$GOPATH/src/github.com/DataDog/datadog-agent/venv3
Running `invoke agent.build`:

- Discards any changes done in `bin/agent/dist`.
- Builds the Agent and writes the binary to `bin/agent/agent`.
- Copies files from `dev/dist` to `bin/agent/dist`. See https://github.com/DataDog/datadog-agent/blob/main/dev/dist/README.md for more information.
If you built an older version of the agent, you may have the error `make: *** No targets specified and no makefile found. Stop.`. To solve the issue, you should remove `CMakeCache.txt` from the `rtloader` folder with `rm rtloader/CMakeCache.txt`.

Please note that the trace agent needs to be built and run separately.
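If you need the trace agent locally, the repository's invoke tasks include a separate namespace for it; a command along the lines of the one below should build it, but the exact task name is an assumption here, so confirm it with `invoke --list` first.

invoke trace-agent.build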
Please refer to the Agent Developer Guide for more details. For instructions on setting up a Windows dev environment, refer to Windows Dev Env.
Testing
Run unit tests using `invoke test`.
invoke test --targets=./pkg/aggregator
You can also use `invoke linter.go` to run just the Go linters.
invoke linter.go
When testing code that depends on rtloader, build and install it first.
invoke rtloader.make && invoke rtloader.install
invoke test --targets=./pkg/collector/python
Run
You can run the agent with:
./bin/agent/agent run -c bin/agent/dist/datadog.yaml
The file `bin/agent/dist/datadog.yaml` is copied from `dev/dist/datadog.yaml` by `invoke agent.build` and must contain a valid API key.
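A development `datadog.yaml` can stay minimal; for instance, just the API key plus a verbose log level while hacking on the Agent (both values are illustrative, and `log_level` is optional):

api_key: <API_KEY>
log_level: debug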
Run a JMX check
In order to run a JMX based check locally, you must have:

- A copy of a JMXFetch jar copied to `dev/dist/jmx/jmxfetch.jar`
- `java` available on your `$PATH`

For detailed instructions, see JMX checks.
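As a rough sketch of what such a check's config looks like (placed under the Agent's usual `conf.d/<check>.d/conf.yaml` layout; host and port are placeholders):

init_config:
  is_jmx: true

instances:
  - host: localhost
    port: 7199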
Contributing code
You'll find information and help on how to contribute code to this project under the `docs/dev` directory of the present repo.
License
The Datadog agent user space components are licensed under the Apache License, Version 2.0. The BPF code is licensed under the General Public License, Version 2.0.