
prometheus/node_exporter

Exporter for machine metrics


Top Related Projects

  • grafana/agent: Vendor-neutral programmable observability pipelines.
  • influxdata/telegraf: Agent for collecting, processing, aggregating, and writing metrics, logs, and other arbitrary data.
  • netdata/netdata: Architected for speed. Automated for easy. Monitoring and troubleshooting, transformed!
  • cloudflare/ebpf_exporter: Prometheus exporter for custom eBPF metrics
  • ncabatoff/process-exporter: Prometheus exporter that mines /proc to report on selected processes
  • prometheus-community/windows_exporter: Prometheus exporter for Windows machines

Quick Overview

Node Exporter is an open-source project that exports hardware and OS metrics exposed by *NIX kernels as Prometheus metrics. It's designed to run on the host system and provide detailed information about the machine's resources, making it an essential tool for monitoring and observability in Prometheus-based systems.

Pros

  • Comprehensive metrics collection for various system aspects (CPU, memory, disk, network, etc.)
  • Highly extensible with a wide range of collectors that can be enabled or disabled
  • Lightweight and efficient, with minimal impact on system resources
  • Well-integrated with the Prometheus ecosystem and supported by many dashboarding tools

Cons

  • Limited to *NIX systems, not suitable for Windows environments without additional tools
  • Some collectors may require root privileges, which can be a security concern
  • Configuration can be complex for advanced use cases
  • May require additional setup for cloud-native or containerized environments

Getting Started

To get started with Node Exporter:

  1. Download the latest release from the GitHub releases page.
  2. Extract the archive:
    tar xvfz node_exporter-*.*-amd64.tar.gz
    cd node_exporter-*.*-amd64
    
  3. Run the Node Exporter:
    ./node_exporter
    
  4. Node Exporter will start and listen on http://localhost:9100 by default.
  5. Configure Prometheus to scrape metrics from Node Exporter by adding the following to your prometheus.yml:
    scrape_configs:
      - job_name: 'node'
        static_configs:
          - targets: ['localhost:9100']
    

For production use, consider running Node Exporter as a system service and configuring appropriate security measures.
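
As one illustration, here is a minimal systemd unit sketch for that purpose, assuming the binary has been copied to /usr/local/bin and a dedicated node_exporter system user exists:

[Unit]
Description=Prometheus Node Exporter
Wants=network-online.target
After=network-online.target

[Service]
User=node_exporter
ExecStart=/usr/local/bin/node_exporter
Restart=on-failure

[Install]
WantedBy=multi-user.target

With systemd, systemctl enable --now node_exporter then starts the service and keeps it enabled across reboots.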

Competitor Comparisons

grafana/agent

Vendor-neutral programmable observability pipelines.

Pros of Grafana Agent

  • More comprehensive data collection capabilities, including logs and traces
  • Supports remote write to various backends, not just Prometheus
  • Easier configuration and management through a single agent

Cons of Grafana Agent

  • Higher resource usage due to additional features
  • Steeper learning curve for users familiar with simpler Node Exporter
  • Less mature project with potentially fewer community contributions

Code Comparison

Prometheus scrape configuration for Node Exporter:

scrape_configs:
  - job_name: 'node'
    static_configs:
      - targets: ['localhost:9100']

Grafana Agent configuration:

metrics:
  wal_directory: /tmp/wal
  global:
    scrape_interval: 15s
  configs:
    - name: default
      scrape_configs:
        - job_name: node
          static_configs:
            - targets: ['localhost:9100']

The Grafana Agent configuration is more verbose but offers greater flexibility for managing multiple data types and remote write destinations. Node Exporter's configuration is simpler and focuses solely on metrics collection, making it easier to set up for basic use cases.
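
To illustrate the remote write flexibility, here is a hedged sketch (the endpoint URL is a placeholder) extending the Grafana Agent configuration above so that scraped metrics are forwarded to a Prometheus-compatible backend:

metrics:
  wal_directory: /tmp/wal
  configs:
    - name: default
      remote_write:
        - url: http://remote-storage.example.com/api/v1/write
      scrape_configs:
        - job_name: node
          static_configs:
            - targets: ['localhost:9100']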

influxdata/telegraf

Agent for collecting, processing, aggregating, and writing metrics, logs, and other arbitrary data.

Pros of Telegraf

  • Multi-platform support: Telegraf can collect metrics from various systems and send them to multiple output destinations, not limited to InfluxDB
  • Extensive plugin ecosystem: Offers a wide range of input, output, aggregator, and processor plugins
  • Flexible configuration: Allows for complex data processing and aggregation before sending metrics

Cons of Telegraf

  • Higher resource usage: Generally consumes more system resources compared to Node Exporter
  • Steeper learning curve: More complex configuration and setup process due to its extensive features

Code Comparison

Node Exporter (Go):

func (c *cpuCollector) Update(ch chan<- prometheus.Metric) error {
    stats, err := cpu.Times(true)
    if err != nil {
        return err
    }
    // ... (metric collection logic)
}

Telegraf (Go):

func (c *CPUStats) Gather(acc telegraf.Accumulator) error {
    times, err := c.ps.CPUTimes(c.PerCPU, c.TotalCPU)
    if err != nil {
        return err
    }
    // ... (metric collection logic)
}

Both projects use Go for implementation, with similar approaches to metric collection. The main difference lies in how metrics are handled and exported, reflecting the broader scope of Telegraf compared to Node Exporter's focus on Prometheus-specific exports.
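
To make the configuration difference concrete, a minimal telegraf.conf sketch using two of Telegraf's standard plugins, inputs.cpu and outputs.prometheus_client (the listen port here is arbitrary), which roughly mirrors Node Exporter's CPU metrics behind a Prometheus scrape endpoint:

[[inputs.cpu]]
  percpu = true
  totalcpu = true

[[outputs.prometheus_client]]
  listen = ":9273"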

netdata/netdata

Architected for speed. Automated for easy. Monitoring and troubleshooting, transformed!

Pros of Netdata

  • Real-time, high-resolution metrics collection and visualization
  • Easy installation and auto-configuration
  • Built-in web dashboard with interactive charts

Cons of Netdata

  • Higher resource usage due to more frequent data collection
  • Less flexible for custom metric collection compared to Node Exporter

Code Comparison

Node Exporter (Go):

func (c *cpuCollector) Update(ch chan<- prometheus.Metric) error {
    times, err := cpu.Times(false)
    if err != nil {
        return err
    }
    for _, t := range times {
        ch <- prometheus.MustNewConstMetric(c.cpu, prometheus.CounterValue, t.User, t.CPU)
    }
    return nil
}

Netdata (C):

static void cpu_chart(int update_every) {
    static RRDSET *st = NULL;
    static RRDDIM *rd_user = NULL, *rd_nice = NULL, *rd_system = NULL, *rd_idle = NULL;

    if(unlikely(!st)) {
        st = rrdset_create_localhost("cpu", "cpu", NULL, "cpu", "CPU Usage", "percentage", 100, update_every, RRDSET_TYPE_STACKED);
        rd_user   = rrddim_add(st, "user", NULL, 1, 1, RRD_ALGORITHM_INCREMENTAL);
        rd_nice   = rrddim_add(st, "nice", NULL, 1, 1, RRD_ALGORITHM_INCREMENTAL);
        rd_system = rrddim_add(st, "system", NULL, 1, 1, RRD_ALGORITHM_INCREMENTAL);
        rd_idle   = rrddim_add(st, "idle", NULL, 1, 1, RRD_ALGORITHM_INCREMENTAL);
    }
}

Both projects offer robust system monitoring capabilities, but Netdata focuses on real-time visualization and ease of use, while Node Exporter provides more flexibility for custom metric collection and integration with Prometheus ecosystems.

cloudflare/ebpf_exporter

Prometheus exporter for custom eBPF metrics

Pros of ebpf_exporter

  • Provides deeper kernel-level insights using eBPF technology
  • Allows for custom, fine-grained metrics collection
  • Lower overhead compared to traditional monitoring methods

Cons of ebpf_exporter

  • Requires more setup and configuration
  • Limited to Linux systems with eBPF support
  • Steeper learning curve for users unfamiliar with eBPF

Code Comparison

node_exporter:

func (c *cpuCollector) Update(ch chan<- prometheus.Metric) error {
    stats, err := cpu.Get()
    if err != nil {
        return err
    }
    // ... metric collection logic
}

ebpf_exporter:

func (e *Exporter) Update(ch chan<- prometheus.Metric) error {
    for _, config := range e.config.Metrics {
        if err := e.updateBPFMetric(ch, config); err != nil {
            return err
        }
    }
    return nil
}

The code snippets show that node_exporter directly collects system metrics, while ebpf_exporter uses eBPF programs to gather more detailed, customizable metrics from the kernel.

Both exporters are valuable tools for monitoring, with node_exporter offering broader system coverage and easier setup, while ebpf_exporter provides deeper, more customizable insights at the cost of increased complexity.

ncabatoff/process-exporter

Prometheus exporter that mines /proc to report on selected processes

Pros of process-exporter

  • Focuses specifically on process-level metrics, providing more detailed information about individual processes
  • Allows for custom grouping and naming of processes based on configurable rules
  • Offers more granular control over which processes to monitor and how to aggregate their metrics

Cons of process-exporter

  • Limited to process-specific metrics, lacking broader system-level information provided by node_exporter
  • Requires more configuration and setup compared to the out-of-the-box functionality of node_exporter
  • May have higher resource usage when monitoring a large number of processes

Code Comparison

node_exporter:

func (c *cpuCollector) Update(ch chan<- prometheus.Metric) error {
    stats, err := cpu.Get()
    if err != nil {
        return err
    }
    // ... metric collection logic
}

process-exporter:

func (p *Proc) GetMetrics() (Metrics, error) {
    if p.Zombie {
        return Metrics{}, nil
    }
    // ... process-specific metric collection logic
}

Both projects use Go and follow similar patterns for metric collection, but process-exporter focuses on individual process metrics while node_exporter covers a broader range of system metrics.
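
As an illustration of those grouping rules, a minimal process-exporter configuration sketch based on its documented catch-all example (narrow the cmdline regex to select specific processes):

process_names:
  - name: "{{.Comm}}"
    cmdline:
    - '.+'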

prometheus-community/windows_exporter

Prometheus exporter for Windows machines

Pros of windows_exporter

  • Specifically designed for Windows systems, offering comprehensive Windows-specific metrics
  • Supports a wide range of Windows-specific collectors (e.g., IIS, MSMQ, Exchange)
  • Active development with frequent updates tailored to Windows environments

Cons of windows_exporter

  • Limited to Windows operating systems, lacking cross-platform support
  • May require additional configuration for certain Windows-specific features
  • Smaller community compared to node_exporter, potentially resulting in fewer resources and third-party integrations

Code Comparison

node_exporter:

func (c *systemCollector) Update(ch chan<- prometheus.Metric) error {
    return nil
}

windows_exporter:

func (c *SystemCollector) Collect(ch chan<- prometheus.Metric) error {
    return nil
}

Both exporters use similar collector structures, but windows_exporter focuses on Windows-specific implementations. The code snippets show the basic structure of collector functions, with slight differences in method and receiver naming.

node_exporter is a versatile, cross-platform exporter for various Unix-like systems, while windows_exporter specializes in Windows environments. Choose based on your target operating system and specific monitoring requirements.


README

Node exporter


Prometheus exporter for hardware and OS metrics exposed by *NIX kernels, written in Go with pluggable metric collectors.

The Windows exporter is recommended for Windows users. To expose NVIDIA GPU metrics, prometheus-dcgm can be used.

Installation and Usage

If you are new to Prometheus and node_exporter there is a simple step-by-step guide.

The node_exporter listens on HTTP port 9100 by default. See the --help output for more options.
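
For example, to sanity-check the endpoint and to move it to another port (the alternate port here is arbitrary):

curl -s http://localhost:9100/metrics | head
./node_exporter --web.listen-address=":9101"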

Ansible

For automated installs with Ansible, there is the Prometheus Community role.

Docker

The node_exporter is designed to monitor the host system. Deploying in containers requires extra care in order to avoid monitoring the container itself.

For situations where containerized deployment is needed, some extra flags must be used to allow the node_exporter access to the host namespaces.

Be aware that any non-root mount points you want to monitor will need to be bind-mounted into the container.

When starting the container for host monitoring, specify the --path.rootfs argument. This argument must match the container-side path of the bind-mounted host root; node_exporter will use path.rootfs as the prefix when accessing the host filesystem.

docker run -d \
  --net="host" \
  --pid="host" \
  -v "/:/host:ro,rslave" \
  quay.io/prometheus/node-exporter:latest \
  --path.rootfs=/host

For Docker compose, similar flag changes are needed.

---
version: '3.8'

services:
  node_exporter:
    image: quay.io/prometheus/node-exporter:latest
    container_name: node_exporter
    command:
      - '--path.rootfs=/host'
    network_mode: host
    pid: host
    restart: unless-stopped
    volumes:
      - '/:/host:ro,rslave'

On some systems, the timex collector requires an additional Docker flag, --cap-add=SYS_TIME, in order to access the required syscalls.
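
For instance, combining that flag with the docker run invocation shown earlier:

docker run -d \
  --net="host" \
  --pid="host" \
  --cap-add=SYS_TIME \
  -v "/:/host:ro,rslave" \
  quay.io/prometheus/node-exporter:latest \
  --path.rootfs=/host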

Collectors

There is varying support for collectors on each operating system. The tables below list all existing collectors and the supported systems.

Collectors are enabled by providing a --collector.<name> flag. Collectors that are enabled by default can be disabled by providing a --no-collector.<name> flag. To enable only some specific collector(s), use --collector.disable-defaults --collector.<name> ....
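
For example, to run with only the cpu and meminfo collectors enabled:

./node_exporter --collector.disable-defaults --collector.cpu --collector.meminfo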

Include & Exclude flags

A few collectors can be configured to include or exclude certain patterns using dedicated flags. The exclude flags are used to indicate "all except", while the include flags are used to say "none except". Note that these flags are mutually exclusive on collectors that support both.

Example:

--collector.filesystem.mount-points-exclude=^/(dev|proc|sys|var/lib/docker/.+|var/lib/kubelet/.+)($|/)

List:

Collector | Scope | Include Flag | Exclude Flag
arp | device | --collector.arp.device-include | --collector.arp.device-exclude
cpu | bugs | --collector.cpu.info.bugs-include | N/A
cpu | flags | --collector.cpu.info.flags-include | N/A
diskstats | device | --collector.diskstats.device-include | --collector.diskstats.device-exclude
ethtool | device | --collector.ethtool.device-include | --collector.ethtool.device-exclude
ethtool | metrics | --collector.ethtool.metrics-include | N/A
filesystem | fs-types | N/A | --collector.filesystem.fs-types-exclude
filesystem | mount-points | N/A | --collector.filesystem.mount-points-exclude
hwmon | chip | --collector.hwmon.chip-include | --collector.hwmon.chip-exclude
hwmon | sensor | --collector.hwmon.sensor-include | --collector.hwmon.sensor-exclude
interrupts | name | --collector.interrupts.name-include | --collector.interrupts.name-exclude
netdev | device | --collector.netdev.device-include | --collector.netdev.device-exclude
qdisk | device | --collector.qdisk.device-include | --collector.qdisk.device-exclude
slabinfo | slab-names | --collector.slabinfo.slabs-include | --collector.slabinfo.slabs-exclude
sysctl | all | --collector.sysctl.include | N/A
systemd | unit | --collector.systemd.unit-include | --collector.systemd.unit-exclude

Enabled by default

Name | Description | OS
arp | Exposes ARP statistics from /proc/net/arp. | Linux
bcache | Exposes bcache statistics from /sys/fs/bcache/. | Linux
bonding | Exposes the number of configured and active slaves of Linux bonding interfaces. | Linux
btrfs | Exposes btrfs statistics | Linux
boottime | Exposes system boot time derived from the kern.boottime sysctl. | Darwin, Dragonfly, FreeBSD, NetBSD, OpenBSD, Solaris
conntrack | Shows conntrack statistics (does nothing if no /proc/sys/net/netfilter/ present). | Linux
cpu | Exposes CPU statistics | Darwin, Dragonfly, FreeBSD, Linux, Solaris, OpenBSD
cpufreq | Exposes CPU frequency statistics | Linux, Solaris
diskstats | Exposes disk I/O statistics. | Darwin, Linux, OpenBSD
dmi | Expose Desktop Management Interface (DMI) info from /sys/class/dmi/id/ | Linux
edac | Exposes error detection and correction statistics. | Linux
entropy | Exposes available entropy. | Linux
exec | Exposes execution statistics. | Dragonfly, FreeBSD
fibrechannel | Exposes fibre channel information and statistics from /sys/class/fc_host/. | Linux
filefd | Exposes file descriptor statistics from /proc/sys/fs/file-nr. | Linux
filesystem | Exposes filesystem statistics, such as disk space used. | Darwin, Dragonfly, FreeBSD, Linux, OpenBSD
hwmon | Expose hardware monitoring and sensor data from /sys/class/hwmon/. | Linux
infiniband | Exposes network statistics specific to InfiniBand and Intel OmniPath configurations. | Linux
ipvs | Exposes IPVS status from /proc/net/ip_vs and stats from /proc/net/ip_vs_stats. | Linux
loadavg | Exposes load average. | Darwin, Dragonfly, FreeBSD, Linux, NetBSD, OpenBSD, Solaris
mdadm | Exposes statistics about devices in /proc/mdstat (does nothing if no /proc/mdstat present). | Linux
meminfo | Exposes memory statistics. | Darwin, Dragonfly, FreeBSD, Linux, OpenBSD
netclass | Exposes network interface info from /sys/class/net/ | Linux
netdev | Exposes network interface statistics such as bytes transferred. | Darwin, Dragonfly, FreeBSD, Linux, OpenBSD
netisr | Exposes netisr statistics | FreeBSD
netstat | Exposes network statistics from /proc/net/netstat. This is the same information as netstat -s. | Linux
nfs | Exposes NFS client statistics from /proc/net/rpc/nfs. This is the same information as nfsstat -c. | Linux
nfsd | Exposes NFS kernel server statistics from /proc/net/rpc/nfsd. This is the same information as nfsstat -s. | Linux
nvme | Exposes NVMe info from /sys/class/nvme/ | Linux
os | Expose OS release info from /etc/os-release or /usr/lib/os-release | any
powersupplyclass | Exposes Power Supply statistics from /sys/class/power_supply | Linux
pressure | Exposes pressure stall statistics from /proc/pressure/. | Linux (kernel 4.20+ and/or CONFIG_PSI)
rapl | Exposes various statistics from /sys/class/powercap. | Linux
schedstat | Exposes task scheduler statistics from /proc/schedstat. | Linux
selinux | Exposes SELinux statistics. | Linux
sockstat | Exposes various statistics from /proc/net/sockstat. | Linux
softnet | Exposes statistics from /proc/net/softnet_stat. | Linux
stat | Exposes various statistics from /proc/stat. This includes boot time, forks and interrupts. | Linux
tapestats | Exposes statistics from /sys/class/scsi_tape. | Linux
textfile | Exposes statistics read from local disk. The --collector.textfile.directory flag must be set. | any
thermal | Exposes thermal statistics like pmset -g therm. | Darwin
thermal_zone | Exposes thermal zone & cooling device statistics from /sys/class/thermal. | Linux
time | Exposes the current system time. | any
timex | Exposes selected adjtimex(2) system call stats. | Linux
udp_queues | Exposes UDP total lengths of the rx_queue and tx_queue from /proc/net/udp and /proc/net/udp6. | Linux
uname | Exposes system information as provided by the uname system call. | Darwin, FreeBSD, Linux, OpenBSD
vmstat | Exposes statistics from /proc/vmstat. | Linux
watchdog | Exposes statistics from /sys/class/watchdog | Linux
xfs | Exposes XFS runtime statistics. | Linux (kernel 4.4+)
zfs | Exposes ZFS performance statistics. | FreeBSD, Linux, Solaris

Disabled by default

node_exporter also implements a number of collectors that are disabled by default. Reasons for this vary by collector, and may include:

  • High cardinality
  • Prolonged runtime that exceeds the Prometheus scrape_interval or scrape_timeout
  • Significant resource demands on the host

You can enable additional collectors as desired by adding them to your init system's or service supervisor's startup configuration for node_exporter but caution is advised. Enable at most one at a time, testing first on a non-production system, then by hand on a single production node. When enabling additional collectors, you should carefully monitor the change by observing the scrape_duration_seconds metric to ensure that collection completes and does not time out. In addition, monitor the scrape_samples_post_metric_relabeling metric to see the changes in cardinality.
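
For example, the following PromQL expressions (assuming the scrape job is named 'node') track those two metrics after a change:

max_over_time(scrape_duration_seconds{job="node"}[1h])
scrape_samples_post_metric_relabeling{job="node"}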

Name | Description | OS
buddyinfo | Exposes statistics of memory fragments as reported by /proc/buddyinfo. | Linux
cgroups | A summary of the number of active and enabled cgroups | Linux
cpu_vulnerabilities | Exposes CPU vulnerability information from sysfs. | Linux
devstat | Exposes device statistics | Dragonfly, FreeBSD
drm | Expose GPU metrics using sysfs / DRM, amdgpu is the only driver which exposes this information through DRM | Linux
drbd | Exposes Distributed Replicated Block Device statistics (to version 8.4) | Linux
ethtool | Exposes network interface information and network driver statistics equivalent to ethtool, ethtool -S, and ethtool -i. | Linux
interrupts | Exposes detailed interrupts statistics. | Linux, OpenBSD
ksmd | Exposes kernel and system statistics from /sys/kernel/mm/ksm. | Linux
lnstat | Exposes stats from /proc/net/stat/. | Linux
logind | Exposes session counts from logind. | Linux
meminfo_numa | Exposes memory statistics from /sys/devices/system/node/node[0-9]*/meminfo, /sys/devices/system/node/node[0-9]*/numastat. | Linux
mountstats | Exposes filesystem statistics from /proc/self/mountstats. Exposes detailed NFS client statistics. | Linux
network_route | Exposes the routing table as metrics | Linux
perf | Exposes perf based metrics (Warning: Metrics are dependent on kernel configuration and settings). | Linux
processes | Exposes aggregate process statistics from /proc. | Linux
qdisc | Exposes queuing discipline statistics | Linux
slabinfo | Exposes slab statistics from /proc/slabinfo. Note that permission of /proc/slabinfo is usually 0400, so set it appropriately. | Linux
softirqs | Exposes detailed softirq statistics from /proc/softirqs. | Linux
sysctl | Expose sysctl values from /proc/sys. Use --collector.sysctl.include(-info) to configure. | Linux
systemd | Exposes service and system status from systemd. | Linux
tcpstat | Exposes TCP connection status information from /proc/net/tcp and /proc/net/tcp6. (Warning: the current version has potential performance issues in high load situations.) | Linux
wifi | Exposes WiFi device and station statistics. | Linux
xfrm | Exposes statistics from /proc/net/xfrm_stat | Linux
zoneinfo | Exposes NUMA memory zone metrics. | Linux

Deprecated

These collectors are deprecated and will be removed in the next major release.

Name | Description | OS
ntp | Exposes local NTP daemon health to check time | any
runit | Exposes service status from runit. | any
supervisord | Exposes service status from supervisord. | any

Perf Collector

The perf collector may not work out of the box on some Linux systems due to kernel configuration and security settings. To allow access, set the following sysctl parameter:

sysctl -w kernel.perf_event_paranoid=X
  • 2 allow only user-space measurements (default since Linux 4.6).
  • 1 allow both kernel and user measurements (default before Linux 4.6).
  • 0 allow access to CPU-specific data but not raw tracepoint samples.
  • -1 no restrictions.

Depending on the configured value, different metrics will be available; for most cases, 0 will provide the most complete set. For more information, see man 2 perf_event_open.

By default, the perf collector will only collect metrics of the CPUs that node_exporter is running on (i.e., runtime.NumCPU). If this is insufficient (e.g. if you run node_exporter with its CPU affinity set to specific CPUs), you can specify a list of alternate CPUs by using the --collector.perf.cpus flag. For example, to collect metrics on CPUs 2-6, you would specify: --collector.perf --collector.perf.cpus=2-6. The CPU configuration is zero indexed and can also take a stride value; e.g. --collector.perf --collector.perf.cpus=1-10:5 would collect on CPUs 1, 5, and 10.

The perf collector is also able to collect tracepoint counts when using the --collector.perf.tracepoint flag. Tracepoints can be found using perf list or from debugfs. An example usage of this would be --collector.perf.tracepoint="sched:sched_process_exec".

Sysctl Collector

The sysctl collector can be enabled with --collector.sysctl. It supports exposing numeric sysctl values as metrics using the --collector.sysctl.include flag and string values as info metrics by using the --collector.sysctl.include-info flag. The flags can be repeated. For sysctls with multiple numeric values, an optional mapping can be given to expose each value as its own metric. Otherwise, an index label is used to identify the different fields.

Examples

Numeric values
Single values

Using --collector.sysctl.include=vm.user_reserve_kbytes: vm.user_reserve_kbytes = 131072 -> node_sysctl_vm_user_reserve_kbytes 131072

Multiple values

A sysctl can contain multiple values, for example:

net.ipv4.tcp_rmem = 4096	131072	6291456

Using --collector.sysctl.include=net.ipv4.tcp_rmem the collector will expose:

node_sysctl_net_ipv4_tcp_rmem{index="0"} 4096
node_sysctl_net_ipv4_tcp_rmem{index="1"} 131072
node_sysctl_net_ipv4_tcp_rmem{index="2"} 6291456

If the indexes have defined meaning like in this case, the values can be mapped to multiple metrics by appending the mapping to the --collector.sysctl.include flag: Using --collector.sysctl.include=net.ipv4.tcp_rmem:min,default,max the collector will expose:

node_sysctl_net_ipv4_tcp_rmem_min 4096
node_sysctl_net_ipv4_tcp_rmem_default 131072
node_sysctl_net_ipv4_tcp_rmem_max 6291456

String values

String values need to be exposed as an info metric. The user selects them by using the --collector.sysctl.include-info flag.

Single values

kernel.core_pattern = core -> node_sysctl_info{key="kernel.core_pattern_info", value="core"} 1

Multiple values

Given the following sysctl:

kernel.seccomp.actions_avail = kill_process kill_thread trap errno trace log allow

Setting --collector.sysctl.include-info=kernel.seccomp.actions_avail will yield:

node_sysctl_info{key="kernel.seccomp.actions_avail", index="0", value="kill_process"} 1
node_sysctl_info{key="kernel.seccomp.actions_avail", index="1", value="kill_thread"} 1
...

Textfile Collector

The textfile collector is similar to the Pushgateway, in that it allows exporting of statistics from batch jobs. It can also be used to export static metrics, such as what role a machine has. The Pushgateway should be used for service-level metrics. The textfile module is for metrics that are tied to a machine.

To use it, set the --collector.textfile.directory flag on the node_exporter commandline. The collector will parse all files in that directory matching the glob *.prom using the text format. Note: Timestamps are not supported.

To atomically push completion time for a cron job:

echo my_batch_job_completion_time $(date +%s) > /path/to/directory/my_batch_job.prom.$$
mv /path/to/directory/my_batch_job.prom.$$ /path/to/directory/my_batch_job.prom

To statically set roles for a machine using labels:

echo 'role{role="application_server"} 1' > /path/to/directory/role.prom.$$
mv /path/to/directory/role.prom.$$ /path/to/directory/role.prom

Filtering enabled collectors

The node_exporter will expose all metrics from enabled collectors by default. This is the recommended way to collect metrics to avoid errors when comparing metrics of different families.

For advanced use, the node_exporter can be passed an optional list of collectors to filter metrics. The collect[] parameter may be used multiple times. In the Prometheus configuration you can use this syntax under the scrape config:

  params:
    collect[]:
      - foo
      - bar

This can be useful for having different Prometheus servers collect specific metrics from nodes.
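
Putting it together, a sketch of a scrape config that collects only the (hypothetical) foo and bar collectors from two example targets:

scrape_configs:
  - job_name: 'node-filtered'
    params:
      collect[]:
        - foo
        - bar
    static_configs:
      - targets: ['node1:9100', 'node2:9100']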

Development building and running

Prerequisites: a working Go toolchain and make.

Building:

git clone https://github.com/prometheus/node_exporter.git
cd node_exporter
make build
./node_exporter <flags>

To see all available configuration flags:

./node_exporter -h

Running tests

make test

TLS endpoint

EXPERIMENTAL

The exporter supports TLS via a new web configuration file.

./node_exporter --web.config.file=web-config.yml

See the exporter-toolkit web-configuration for more details.
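
A minimal web-config.yml sketch, assuming a certificate and key have already been provisioned for the host:

tls_server_config:
  cert_file: node_exporter.crt
  key_file: node_exporter.key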