elastic / logstash

Logstash - transport and process your logs, events, or other data


Top Related Projects

  • Loki - Like Prometheus, but for logs.
  • Fluentd - Unified Logging Layer (project under CNCF)
  • Apache NiFi
  • Telegraf - Agent for collecting, processing, aggregating, and writing metrics, logs, and other arbitrary data.

Quick Overview

Logstash is an open-source data collection engine with real-time pipelining capabilities. It can dynamically unify data from disparate sources and normalize it into destinations of your choice. Logstash is part of the Elastic Stack, alongside Elasticsearch and Kibana.

Pros

  • Highly flexible and extensible with a large ecosystem of plugins
  • Supports a wide variety of input sources, filters, and output destinations
  • Powerful data transformation capabilities with built-in filters
  • Seamless integration with other Elastic Stack components

Cons

  • Can be resource-intensive, especially for high-volume data processing
  • Configuration can be complex for advanced use cases
  • Learning curve can be steep for newcomers to the Elastic Stack
  • Performance may degrade with complex pipelines or large amounts of data

Code Examples

  1. Basic pipeline configuration:
input {
  file {
    path => "/var/log/apache/access.log"
    start_position => "beginning"
  }
}

filter {
  grok {
    match => { "message" => "%{COMBINEDAPACHELOG}" }
  }
  date {
    match => [ "timestamp", "dd/MMM/yyyy:HH:mm:ss Z" ]
  }
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "apache_logs-%{+YYYY.MM.dd}"
  }
}

This example configures Logstash to read Apache access logs, parse them using Grok, and index them in Elasticsearch.

  2. Using multiple inputs:
input {
  beats {
    port => 5044
  }
  tcp {
    port => 5000
  }
}

This configuration sets up Logstash to receive data from Filebeat and a TCP input simultaneously.
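
When combining inputs like this, it helps to tag events by source so later filter or output stages can tell them apart. A minimal sketch, with arbitrary tag names (tags is a common option available on all inputs):

input {
  beats {
    port => 5044
    tags => ["beats"]
  }
  tcp {
    port => 5000
    tags => ["tcp"]
  }
}

filter {
  if "tcp" in [tags] {
    # handle raw TCP events differently here
    mutate { add_field => { "source_kind" => "tcp" } }
  }
}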

  3. Conditional output:
output {
  if [loglevel] == "ERROR" {
    email {
      to => "admin@example.com"
      subject => "Error log received"
      body => "An error log was received: %{message}"
    }
  }
  elasticsearch {
    hosts => ["localhost:9200"]
  }
}

This example sends an email alert for error logs while indexing all logs in Elasticsearch.

Getting Started

  1. Install Logstash:

    wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -
    echo "deb https://artifacts.elastic.co/packages/7.x/apt stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-7.x.list
    sudo apt-get update && sudo apt-get install logstash
    
  2. Create a simple configuration file (e.g., logstash.conf):

    input { stdin { } }
    output {
      elasticsearch { hosts => ["localhost:9200"] }
      stdout { codec => rubydebug }
    }
    
  3. Run Logstash:

    bin/logstash -f logstash.conf
    

This setup allows you to input data via stdin, which Logstash will then index in Elasticsearch and print to stdout.
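
Optionally, you can validate a configuration file before starting the pipeline; the --config.test_and_exit flag is part of the standard Logstash CLI:

bin/logstash -f logstash.conf --config.test_and_exit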

Competitor Comparisons

Loki

Like Prometheus, but for logs.

Pros of Loki

  • Designed for high-volume log storage and querying, with efficient indexing
  • Integrates seamlessly with other Grafana ecosystem tools
  • Supports multi-tenancy out of the box

Cons of Loki

  • Limited parsing and transformation capabilities compared to Logstash
  • Fewer input and output plugins available
  • Steeper learning curve for users familiar with traditional logging systems

Code Comparison

Logstash configuration example:

input {
  file {
    path => "/var/log/syslog"
    type => "syslog"
  }
}
filter {
  grok {
    match => { "message" => "%{SYSLOGLINE}" }
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
  }
}

Loki configuration example:

auth_enabled: false

server:
  http_listen_port: 3100

ingester:
  lifecycler:
    address: 127.0.0.1
    ring:
      kvstore:
        store: inmemory
      replication_factor: 1
    final_sleep: 0s
  chunk_idle_period: 5m
  chunk_retain_period: 30s

Both Logstash and Loki serve as log aggregation and processing tools, but they have different strengths and use cases. Logstash offers more flexibility in data transformation and supports a wide range of input and output plugins. Loki, on the other hand, focuses on efficient log storage and querying, making it well-suited for high-volume environments and seamless integration with Grafana dashboards.

Fluentd

Fluentd: Unified Logging Layer (project under CNCF)

Pros of Fluentd

  • Lightweight and more resource-efficient
  • Better performance for high-volume log processing
  • Extensive plugin ecosystem with over 500 community-contributed plugins

Cons of Fluentd

  • Steeper learning curve for configuration
  • Less out-of-the-box functionality compared to Logstash
  • Limited built-in data transformation capabilities

Code Comparison

Fluentd configuration:

<source>
  @type tail
  path /var/log/httpd-access.log
  tag apache.access
  <parse>
    @type apache2
  </parse>
</source>

Logstash configuration:

input {
  file {
    path => "/var/log/httpd-access.log"
    start_position => "beginning"
  }
}
filter {
  grok {
    match => { "message" => "%{COMBINEDAPACHELOG}" }
  }
}

Both Fluentd and Logstash are popular log collection and processing tools. Fluentd excels in performance and resource efficiency, making it suitable for high-volume environments. It offers a vast plugin ecosystem but requires more effort to configure. Logstash, on the other hand, provides more out-of-the-box functionality and easier configuration, but may consume more resources. The choice between the two depends on specific use cases, performance requirements, and existing infrastructure.

Apache NiFi

Pros of NiFi

  • More comprehensive data flow management with a visual interface for designing complex workflows
  • Supports a wider range of data sources and destinations out-of-the-box
  • Better suited for real-time data processing and streaming analytics

Cons of NiFi

  • Steeper learning curve due to its more complex architecture and features
  • Requires more system resources to run effectively, especially for large-scale deployments
  • Less tightly integrated with the Elastic Stack ecosystem

Code Comparison

NiFi uses a Java-based approach for custom processors:

import org.apache.nifi.annotation.documentation.CapabilityDescription;
import org.apache.nifi.annotation.documentation.Tags;
import org.apache.nifi.processor.AbstractProcessor;
import org.apache.nifi.processor.ProcessContext;
import org.apache.nifi.processor.ProcessSession;
import org.apache.nifi.processor.exception.ProcessException;

@Tags({"example", "processor"})
@CapabilityDescription("Example processor for NiFi")
public class ExampleProcessor extends AbstractProcessor {
    @Override
    public void onTrigger(ProcessContext context, ProcessSession session) throws ProcessException {
        // Custom processing logic here
    }
}

Logstash uses a Ruby-based DSL for custom filters:

filter {
  ruby {
    code => "
      event.set('example_field', 'custom value')
      # Custom processing logic here
    "
  }
}

Both NiFi and Logstash are powerful data processing tools, but they cater to different use cases and complexity levels. NiFi excels in complex data flow management, while Logstash is more focused on log processing and integration with the Elastic Stack.

Telegraf

Agent for collecting, processing, aggregating, and writing metrics, logs, and other arbitrary data.

Pros of Telegraf

  • Lightweight and efficient, with lower resource usage
  • Supports a wider range of input plugins and data sources
  • Native integration with InfluxDB and other time-series databases

Cons of Telegraf

  • Less flexible for complex data transformations
  • Smaller community and ecosystem compared to Logstash
  • Limited output options compared to Logstash's versatility

Code Comparison

Telegraf configuration (TOML):

[[inputs.cpu]]
  percpu = true
  totalcpu = true
  collect_cpu_time = false
  report_active = false

Logstash configuration (Ruby-like DSL):

input {
  file {
    path => "/var/log/syslog"
    type => "syslog"
  }
}
filter {
  grok {
    match => { "message" => "%{SYSLOGLINE}" }
  }
}

Telegraf focuses on a declarative configuration style using TOML, while Logstash uses a Ruby-like DSL for its pipeline configuration. Telegraf's configuration is typically more concise and straightforward, especially for simple data collection tasks. Logstash's configuration allows for more complex data processing and transformation pipelines, with a wider range of filter plugins available.

Both tools are popular choices for data collection and processing, with Telegraf excelling in lightweight metrics collection and Logstash offering more flexibility for log processing and complex data transformations.

README

Logstash

Logstash is part of the Elastic Stack along with Beats, Elasticsearch, and Kibana. Logstash is a server-side data processing pipeline that ingests data from a multitude of sources simultaneously, transforms it, and then sends it to your favorite "stash." (Ours is Elasticsearch, naturally.) Logstash has over 200 plugins, and you can easily write your own as well.

For more info, see https://www.elastic.co/products/logstash

Documentation and Getting Started

You can find the documentation and getting started guides for Logstash on the elastic.co site.

For information about building the documentation, see the README in https://github.com/elastic/docs

Downloads

You can download officially released Logstash binaries, as well as debian/rpm packages for the supported platforms, from the downloads page.

Need Help?

Logstash Plugins

Logstash plugins are hosted in separate repositories under the logstash-plugins github organization. Each plugin is a self-contained Ruby gem which gets published to RubyGems.org.
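
Because each plugin ships as its own gem, plugins are managed with the bundled plugin manager rather than by editing Logstash itself. For example (logstash-output-kafka is just an illustrative plugin name):

bin/logstash-plugin list
bin/logstash-plugin install logstash-output-kafka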

Writing your own Plugin

Logstash is known for its extensibility. There are hundreds of plugins for Logstash and you can write your own very easily! For more info on developing and testing these plugins, please see the working with plugins section.

Plugin Issues and Pull Requests

Please open new issues and pull requests for a plugin under its own repository.

For example, if you need to report an issue or enhancement for the Elasticsearch output, please do so here.

Logstash core will continue to exist under this repository and all related issues and pull requests can be submitted here.

Developing Logstash Core

Prerequisites

  • Install JDK version 11 or 17. Make sure to set the JAVA_HOME environment variable to the path of your JDK installation directory, for example set JAVA_HOME=<JDK_PATH>.
  • Install JRuby 9.2.x. It is recommended to use a Ruby version manager such as RVM or rbenv.
  • Install the rake and bundler tools using gem install rake and gem install bundler, respectively.
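
Put together, a prerequisite setup might look roughly like the following sketch; the JDK path and the exact JRuby 9.2.x patch release are assumptions that will vary per machine:

export JAVA_HOME=/usr/lib/jvm/java-11-openjdk   # adjust to your JDK location
rbenv install jruby-9.2.20.1                    # any JRuby 9.2.x release works
rbenv local jruby-9.2.20.1
gem install rake
gem install bundler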

RVM install (optional)

If you prefer to use rvm (ruby version manager) to manage Ruby versions on your machine, follow these directions. In the Logstash folder:

gpg --keyserver hkp://keys.gnupg.net --recv-keys 409B6B1796C275462A1703113804BB82D39DC0E3
\curl -sSL https://get.rvm.io | bash -s stable --ruby=$(cat .ruby-version)

Check Ruby version

Before you proceed, please check your Ruby version by running:

$ ruby -v

The printed version should be the same as in the .ruby-version file.

Building Logstash

The Logstash project includes the source code for all of Logstash, including the Elastic-Licensed X-Pack features and functions; to run Logstash from source using only the OSS-licensed code, export the OSS environment variable with a value of true:

export OSS=true

  • Set up the location of the source code to build:

export LOGSTASH_SOURCE=1
export LOGSTASH_PATH=/YOUR/LOGSTASH/DIRECTORY

Install dependencies with Gradle (recommended) [1]

  • Install development dependencies:

./gradlew installDevelopmentGems

  • Install default plugins and other dependencies:

./gradlew installDefaultGems

Verify the installation

To verify your environment, run the following to start Logstash and send your first event:

bin/logstash -e 'input { stdin { } } output { stdout {} }'

This should start Logstash with the stdin input, waiting for you to enter an event:

hello world
2016-11-11T01:22:14.405+0000 0.0.0.0 hello world

Advanced: Drip Launcher

Drip is a tool that solves the slow JVM startup problem while developing Logstash. The drip script is intended to be a drop-in replacement for the java command. We recommend using drip during development, in particular for running tests. With drip, the first invocation of a command will not be faster, but subsequent invocations will be swift.

To tell Logstash to use drip, set the environment variable JAVACMD=`which drip`.

Example (but see the Testing section below before running rspec for the first time):

JAVACMD=`which drip` bin/rspec

Caveats

Drip does not work with STDIN. You cannot use drip for running configs which use the stdin plugin.

Building Logstash Documentation

To build the Logstash Reference (open source content only) on your local machine, clone the following repos:

  • logstash - contains main docs about core features
  • logstash-docs - contains generated plugin docs
  • docs - contains doc build files

Make sure you have the same branch checked out in logstash and logstash-docs. Check out master in the docs repo.
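
For example, the three checkouts might be prepared like this (the 7.17 branch name is only illustrative):

git clone https://github.com/elastic/logstash.git
git clone https://github.com/elastic/logstash-docs.git
git clone https://github.com/elastic/docs.git
(cd logstash && git checkout 7.17)       # illustrative branch
(cd logstash-docs && git checkout 7.17)  # must match the logstash branch
(cd docs && git checkout master)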

Run the doc build script from within the docs repo. For example:

./build_docs.pl --doc ../logstash/docs/index.asciidoc --chunk=1 -open

Testing

Most of the unit tests in Logstash are written using rspec for the Ruby parts. For the Java parts, we use junit. For testing you can use the test rake tasks and the bin/rspec command; see the instructions below:

Core tests

1. To run the core tests, you can use the Gradle task:

./gradlew test

or use the rspec tool to run all tests or run a specific test:

bin/rspec
bin/rspec spec/foo/bar_spec.rb

Note that before running the rspec command for the first time you need to set up the RSpec test dependencies by running:

./gradlew bootstrap

2. To run only the subset of tests covering the Java codebase, run:

./gradlew javaTests

3. To execute the complete test suite, including the integration tests, run:

./gradlew check

4. To execute a single Ruby test, run:

SPEC_OPTS="-fd -P logstash-core/spec/logstash/api/commands/default_metadata_spec.rb" ./gradlew :logstash-core:rubyTests --tests org.logstash.RSpecTests

5. To execute a single spec for the integration tests, run:

./gradlew integrationTests -PrubyIntegrationSpecs=specs/slowlog_spec.rb

Sometimes you might find a change to a piece of Logstash code causes a test to hang. These can be hard to debug.

If you set LS_JAVA_OPTS="-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=5005", you can connect to a running Logstash with your IDE's debugger, which can be a great way of finding the issue.
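
For example, a hanging spec could be launched like this and then attached to from your IDE on port 5005 (the spec path reuses the placeholder from above):

LS_JAVA_OPTS="-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=5005" bin/rspec spec/foo/bar_spec.rb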

Plugins tests

To run the tests of all currently installed plugins:

rake test:plugins

You can install the default set of plugins included in the logstash package:

rake test:install-default

Note that if a plugin is installed using the plugin manager (bin/logstash-plugin install ...), do not forget to also install the plugin's development dependencies using the following command after the plugin installation:

bin/logstash-plugin install --development

Building Artifacts

Built artifacts will be placed in the LS_HOME/build directory; the directory will be created if it is not already present.

You can build a Logstash snapshot package as a tarball or zip file:

./gradlew assembleTarDistribution
./gradlew assembleZipDistribution

OSS-only artifacts can similarly be built with their own gradle tasks:

./gradlew assembleOssTarDistribution
./gradlew assembleOssZipDistribution

You can also build .rpm and .deb packages, but the fpm tool is required:

rake artifact:rpm
rake artifact:deb

and:

rake artifact:rpm_oss
rake artifact:deb_oss

Using a Custom JRuby Distribution

If you want the build to use a custom JRuby you can do so by setting a path to a custom JRuby distribution's source root via the custom.jruby.path Gradle property.

E.g.

./gradlew clean test -Pcustom.jruby.path="/path/to/jruby"

Project Principles

  • Community: If a newbie has a bad time, it's a bug.
  • Software: Make it work, then make it right, then make it fast.
  • Technology: If it doesn't do a thing today, we can make it do it tomorrow.

Contributing

All contributions are welcome: ideas, patches, documentation, bug reports, complaints, and even something you drew up on a napkin.

Programming is not a required skill. Whatever you've seen about open source and maintainers or community members saying "send patches or die" - you will not see that here.

It is more important that you are able to contribute.

For more information about contributing, see the CONTRIBUTING file.

Footnotes

  1. Use bundle instead of gradle to install dependencies

    Alternatively, instead of using gradle you can also use bundle:

    • Install development dependencies

      bundle config set --local path vendor/bundle
      bundle install
      
    • Bootstrap the environment:

      rake bootstrap
      
    • You can then use bin/logstash to start Logstash, but there are no plugins installed. To install default plugins, you can run:

      rake plugin:install-default
      

    This will install the 80+ default plugins, making Logstash ready to connect to multiple data sources, perform transformations, and send the results to Elasticsearch and other destinations.