spotify/helios

Docker container orchestration platform


Top Related Projects

  • Luigi (18,244 stars) - Luigi is a Python module that helps you build complex pipelines of batch jobs. It handles dependency resolution, workflow management, visualization etc. It also comes with Hadoop support built in.
  • Apache Airflow (39,846 stars) - A platform to programmatically author, schedule, and monitor workflows.
  • Prefect (19,133 stars) - Prefect is a workflow orchestration framework for building resilient data pipelines in Python.
  • Dagster (13,049 stars) - An orchestration platform for the development, production, and observation of data assets.
  • Apache NiFi (5,270 stars)

Quick Overview

Helios is a Docker orchestration platform developed by Spotify. It allows users to deploy and manage Docker containers across a cluster of hosts, providing tools for service discovery, health checking, and rolling updates. Helios aims to simplify the process of deploying and managing distributed applications in containerized environments.

Pros

  • Seamless integration with Docker, leveraging its containerization benefits
  • Built-in service discovery and health checking capabilities
  • Supports rolling updates for zero-downtime deployments
  • Provides a RESTful API for easy integration with other tools and systems (see the client sketch just below)
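
The REST API can also be exercised through the Java client shown in the Code Examples section below. Here is a minimal sketch of listing the jobs known to a master; the jobs() method name is an assumption not shown in the original examples and should be checked against the HeliosClient documentation:

// Hypothetical sketch: query the master's HTTP API through the Java client.
// The jobs() method name and its return type are assumptions to verify
// against your Helios version.
HeliosClient client = HeliosClient.newBuilder()
    .setEndpoints("http://helios-master:5801")
    .build();

client.jobs().get().forEach((id, job) -> System.out.println(id));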

Cons

  • Limited active development and updates in recent years
  • Smaller community compared to more popular container orchestration platforms like Kubernetes
  • Primarily designed for Spotify's internal use, which may limit its adaptability for other organizations
  • Steeper learning curve for users not familiar with Spotify's infrastructure

Code Examples

// Creating a Helios client. HeliosClient and the descriptor classes
// (Job, JobId, Deployment, Goal, PortMapping) ship with the Helios Java
// client libraries; verify the exact packages against your Helios version.
HeliosClient client = HeliosClient.newBuilder()
    .setEndpoints("http://helios-master:5801")
    .build();

// Defining and creating a job
JobId jobId = JobId.newBuilder()
    .setName("my-service")
    .setVersion("1.0")
    .build();

Job job = Job.newBuilder()
    .setName(jobId.getName())
    .setVersion(jobId.getVersion())
    .setImage("myregistry/myimage:1.0")
    .addPort("http", PortMapping.of(8080))
    .build();

client.createJob(job).get();

// Deploying the job to a host
String host = "myhost.example.com";
Deployment deployment = Deployment.of(jobId, Goal.START);
client.deploy(deployment, host).get();
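
The deployment created above can later be inspected and removed through the same client. This is only a sketch: the jobStatus and undeploy method names are assumptions mirroring the CLI's status and undeploy commands, and should be checked against the client's documentation:

// Hypothetical follow-up: check the job's status on the host, then undeploy it.
// jobStatus() and undeploy() are assumed method names (see note above).
System.out.println(client.jobStatus(jobId).get());
client.undeploy(jobId, host).get();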

Getting Started

To get started with Helios, follow these steps:

  1. Set up Helios master and agent nodes
  2. Install the Helios CLI tool
  3. Configure your Docker images
  4. Create a Helios job definition
  5. Deploy the job using the Helios CLI or API

Example CLI commands:

# Create a job
helios create my-job:v1 myregistry/myimage:1.0

# Deploy the job
helios deploy my-job:v1 myhost.example.com

# Check job status
helios status my-job:v1

For more detailed instructions, refer to the Helios documentation on GitHub.

Competitor Comparisons

Luigi (18,244 stars)

Luigi is a Python module that helps you build complex pipelines of batch jobs. It handles dependency resolution, workflow management, visualization etc. It also comes with Hadoop support built in.

Pros of Luigi

  • More versatile and widely applicable for general data processing workflows
  • Larger community and more active development
  • Supports a wider range of task types and data sources

Cons of Luigi

  • Steeper learning curve due to more complex architecture
  • Requires more setup and configuration for simple tasks
  • Can be overkill for smaller projects or simpler workflows

Code Comparison

Luigi:

import luigi

class MyTask(luigi.Task):
    def requires(self):
        # Upstream dependency (SomeOtherTask would be another luigi.Task)
        return SomeOtherTask()

    def run(self):
        # Task logic here
        pass

Helios:

@Singleton
public class MyService {
    @Inject
    public MyService(HeliosClient helios) {
        // Service logic here
    }
}

Summary

Luigi is a more comprehensive data processing framework, offering greater flexibility and a wider range of features. It's better suited for complex data pipelines and large-scale projects. Helios, on the other hand, is more focused on service deployment and management, making it simpler to use for specific containerized application deployment scenarios. Luigi has a larger community and more active development, but this comes with a steeper learning curve. The choice between the two depends on the specific needs of your project: data processing workflows (Luigi) vs. containerized service deployment (Helios).

Apache Airflow (39,846 stars)

A platform to programmatically author, schedule, and monitor workflows.

Pros of Airflow

  • More comprehensive workflow management system, supporting complex DAGs and dependencies
  • Larger community and ecosystem, with extensive plugins and integrations
  • Better suited for data pipeline orchestration and ETL processes

Cons of Airflow

  • Steeper learning curve and more complex setup compared to Helios
  • Heavier resource requirements, potentially overkill for simpler deployment scenarios
  • Less focused on container orchestration and service discovery

Code Comparison

Helios (Docker container deployment):

job = {
    "name": "myapp",
    "version": "1.0",
    "image": "myapp:1.0",
    "ports": [{"port": 8080, "protocol": "tcp"}]
}

Airflow (DAG definition):

from datetime import datetime

from airflow import DAG
from airflow.operators.python_operator import PythonOperator

def my_function():
    print("Running my task")  # Task logic here

# start_date is required for the DAG to be scheduled
dag = DAG('my_dag', start_date=datetime(2023, 1, 1), schedule_interval='@daily')
task = PythonOperator(
    task_id='my_task',
    python_callable=my_function,
    dag=dag
)

Helios focuses on simple container deployment, while Airflow provides a more comprehensive workflow management system. Helios is better suited for lightweight container orchestration, whereas Airflow excels in complex data pipeline orchestration and scheduling.

Prefect (19,133 stars)

Prefect is a workflow orchestration framework for building resilient data pipelines in Python.

Pros of Prefect

  • More comprehensive workflow management system with advanced features like scheduling, retries, and distributed execution
  • Active development and larger community support, with frequent updates and contributions
  • Extensive documentation and tutorials for easier onboarding and usage

Cons of Prefect

  • Steeper learning curve due to more complex architecture and concepts
  • Potentially overkill for simpler deployment scenarios or smaller projects
  • Requires more setup and configuration compared to Helios

Code Comparison

Helios setup example (a helios-solo service definition in Docker Compose-style YAML):

helios:
  image: spotify/helios-solo:latest
  ports:
    - "5801:5801"

Prefect flow example:

from prefect import task, Flow

@task
def hello_task():
    print("Hello, Prefect!")

with Flow("My First Flow") as flow:
    hello_task()

While Helios focuses on container deployment and management, Prefect offers a more comprehensive workflow orchestration solution. Helios is simpler to set up for basic container deployments, but Prefect provides more advanced features for complex data pipelines and task dependencies. The choice between the two depends on the specific needs of your project and the level of workflow management required.

Dagster (13,049 stars)

An orchestration platform for the development, production, and observation of data assets.

Pros of Dagster

  • More comprehensive data orchestration platform with broader workflow management capabilities
  • Larger and more active community, with frequent updates and extensive documentation
  • Supports multiple programming languages and integrates with various data tools

Cons of Dagster

  • Steeper learning curve due to its extensive features and concepts
  • Potentially overkill for simpler deployment scenarios or smaller projects
  • Requires more setup and configuration compared to Helios

Code Comparison

Helios (Java):

@Profile("prod")
@Configuration
public class ProdConfig {
    @Bean
    public DataSource dataSource() {
        return new EmbeddedDatabaseBuilder()
            .setType(EmbeddedDatabaseType.H2)
            .build();
    }
}

Dagster (Python):

from dagster import job, op

@op
def extract_data():
    return []  # Extract data logic here

@op
def transform_data(raw_data):
    return raw_data  # Transform data logic here

@op
def load_data(transformed_data):
    pass  # Load data logic here

@job
def my_etl_job():
    raw_data = extract_data()
    transformed_data = transform_data(raw_data)
    load_data(transformed_data)

The code snippets showcase different approaches: Helios focuses on Java-based container deployment, while Dagster emphasizes Python-based workflow definition and data pipeline orchestration.

Apache NiFi (5,270 stars)

Pros of NiFi

  • More comprehensive data flow management system, handling a wider range of data processing tasks
  • Larger and more active community, with frequent updates and extensive documentation
  • Provides a user-friendly web-based interface for designing and monitoring data flows

Cons of NiFi

  • Steeper learning curve due to its more complex architecture and extensive feature set
  • Requires more system resources to run effectively, especially for large-scale deployments
  • Less focused on container orchestration compared to Helios

Code Comparison

Helios (Docker container deployment):

registries:
  dockerhub: https://index.docker.io/v1/
jobs:
  myjob:
    image: myapp:1.0
    ports:
      - 8080

NiFi (Processor configuration):

<processor>
  <id>abc123</id>
  <name>GetFile</name>
  <properties>
    <entry>
      <key>Input Directory</key>
      <value>/path/to/input</value>
    </entry>
  </properties>
</processor>

While Helios focuses on Docker container deployment and management, NiFi is designed for creating and managing complex data flows. The code examples highlight this difference, with Helios using YAML for container configuration and NiFi using XML for processor setup within its data flow system.

README

Helios

Status: Sunset

This project was created when there were no open source container orchestration frameworks. Since the advent of Kubernetes and other such tools, we've stopped using Helios at Spotify and have switched to Kubernetes. This project will no longer accept PRs.

Helios is a Docker orchestration platform for deploying and managing containers across an entire fleet of servers. Helios provides an HTTP API as well as a command-line client to interact with servers running your containers. It also keeps a history of events in your cluster, including information such as deploys, restarts, and version changes.

Usage Example

# Create an nginx job using the nginx container image, exposing it on the host on port 8080
$ helios create nginx:v1 nginx:1.7.1 -p http=80:8080

# Check that the job is listed
$ helios jobs

# List helios hosts
$ helios hosts

# Deploy the nginx job on one of the hosts
$ helios deploy nginx:v1 <host>

# Check the job status
$ helios status

# Curl the nginx container when it's started running
$ curl <host>:8080

# Undeploy the nginx job
$ helios undeploy -a nginx:v1

# Remove the nginx job
$ helios remove nginx:v1

Getting Started

If you're looking for how to use Helios, see the docs directory. Most probably the User Manual is what you're looking for.

If you're looking for how to download, build, install and run Helios, keep reading.

Prerequisites

The binary release of Helios is built for Ubuntu 14.04.1 LTS, but Helios should be buildable on any platform with at least Java 8 and a recent Maven 3 available.

Other components that are required for a Helios installation are:

  • ZooKeeper
  • Docker

Install & Run

Quick start for local usage

Use helios-solo to launch a local environment with a Helios master and agent.

First, ensure you have Docker installed locally. Test this by making sure docker info works. Then install helios-solo:

# add the helios apt repository
$ sudo apt-key adv --keyserver hkp://keys.gnupg.net:80 --recv-keys 6F75C6183FF5E93D
$ echo "deb https://dl.bintray.com/spotify/deb trusty main" | sudo tee -a /etc/apt/sources.list.d/helios.list

# install helios-solo on Debian/Ubuntu
$ sudo apt-get update && sudo apt-get install helios-solo

# install helios-solo on OS X
$ brew tap spotify/public && brew install helios-solo

Once you've got it installed, bring up the helios-solo cluster:

# launch a helios cluster in a Docker container
$ helios-up

# check if it worked and the solo agent is registered
$ helios-solo hosts

You can now use helios-solo as your local Helios cluster. If you have issues, see the detailed helios-solo documentation.
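
With the solo cluster up, the Java client shown in the Code Examples above can be pointed at the local master. This is a sketch only, assuming the solo master is reachable on localhost:5801 (the port published for helios-solo elsewhere in this document); the nginx job values mirror the Usage Example above:

// Hypothetical sketch: connect to a local helios-solo master and create a job.
// The localhost:5801 endpoint and the nginx job values are illustrative only.
HeliosClient soloClient = HeliosClient.newBuilder()
    .setEndpoints("http://localhost:5801")
    .build();

Job nginx = Job.newBuilder()
    .setName("nginx")
    .setVersion("v1")
    .setImage("nginx:1.7.1")
    .addPort("http", PortMapping.of(80))  // container port 80; host mapping omitted
    .build();

soloClient.createJob(nginx).get();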

Production on Debian, Ubuntu, etc.

Prebuilt Debian packages are available for production use. To install:

# add the helios apt repository
$ sudo apt-key adv --keyserver hkp://keys.gnupg.net:80 --recv-keys 6F75C6183FF5E93D
$ echo "deb https://dl.bintray.com/spotify/deb trusty main" | sudo tee -a /etc/apt/sources.list.d/helios.list

# install Helios command-line tools
$ sudo apt-get install helios

# install Helios master (assumes you have zookeeperd installed)
$ sudo apt-get install helios-master

# install Helios agent (assumes you have Docker installed)
$ sudo apt-get install helios-agent

Note that the Helios master and agent services both try to connect to ZooKeeper at localhost:2181 by default. We recommend reading the Helios configuration & deployment guide before starting a production cluster.

Manual approach

The launcher scripts are in bin/. After you've built Helios following the instructions below, you should be able to start the agent and master:

$ bin/helios-master &
$ bin/helios-agent &

If you see any issues, make sure you have the prerequisites (Docker and Zookeeper) installed.

Build & Test

First, make sure you have Docker installed locally. If you're using OS X, we recommend using docker-machine.

Actually building Helios and running its tests should be a simple matter of running:

$ mvn clean package

For more info on setting up a development environment and an introduction to the source code, see the Developer Guide.

How it all fits together

The helios command line tool connects to your Helios master via HTTP. The Helios master is connected to a ZooKeeper cluster that is used both as persistent storage and as a communications channel to the agents. The Helios agent is a Java process that typically lives on the same host as the Docker daemon, connecting to it via a Unix socket or, optionally, a TCP socket.

Helios is designed for high availability, with execution state confined to a (potentially highly available) ZooKeeper cluster. This means that several helios-master services can respond to HTTP requests concurrently, removing any single point of failure from the Helios setup using straightforward HTTP load-balancing strategies.
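
As an illustration of that load-balancing point, a client can be given several master endpoints, or a single URL for a load balancer placed in front of the masters. A sketch only; whether the builder's setEndpoints accepts multiple endpoints like this is an assumption to verify against your Helios version:

// Hypothetical sketch: point the client at several helios-master instances.
// A single load-balancer URL in front of the masters works the same way.
HeliosClient client = HeliosClient.newBuilder()
    .setEndpoints("http://helios-master-a:5801", "http://helios-master-b:5801")
    .build();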

Production Readiness

We at Spotify are running Helios in production (as of October 2015) with dozens of critical backend services, so we trust it. Whether you should trust it to not cause smoking holes in your infrastructure is up to you.

Why Helios?

There are a number of Docker orchestration systems, why should you choose Helios?

  • Helios is pragmatic. We're not trying to solve everything today, but what we have, we try hard to ensure is rock-solid. So we don't have things like resource limits or dynamic scheduling yet. Today, for us, it has been more important to get the CI/CD use cases, and surrounding tooling solid first. That said, we eventually want to do dynamic scheduling, composite jobs, etc. (see below for more). But what we provide, we use (i.e. we eat our own dogfood), so you can have reasonable assurances that anything that's been in the codebase for more than a week or two is pretty solid as we release frequently (usually, at least weekly) into production here at Spotify.

  • Helios should be able to fit in the way you already do ops. Of the popular Docker orchestration frameworks, Helios is the only one we're aware of that doesn't have anything much in the way of system dependencies. That is, we don't require that you run in AWS or GCE, etc. We don't require a specific network topology. We don't require you run a specific operating system. We don't require that you're using Mesos. Our only requirement is that you have a ZooKeeper cluster somewhere and a JVM on the machines which Helios runs on. So if you're using Puppet, Chef, etc., to manage the rest of the OS install and configuration, you can still continue to do so with whatever Linux OS you're using.

  • Don't have to drink all the Kool-Aid. Generally, we try to make it so you only have to take the features you want to use, and should be able to ignore the rest. For example, Helios doesn't prescribe a discovery service: we happen to provide a plugin for SkyDNS, and we hear that someone else is working on one for another service, but if you don't want to even use a discovery service, you don't have to.

  • Scalability. We're already at hundreds of machines in production, but we're nowhere near the limit before the existing architecture would need to be revisited. Helios can also scale down well in that you can run a single machine instance if you want to run it all locally.

Other Software You Might Want To Consider

Here are a few other things you probably want to consider using alongside Helios:

  • docker-gc Garbage collects dead containers and removes unused images.
  • helios-skydns Makes it so you can auto-register services in SkyDNS. If you use leading underscores in your SRV record names, let us know; we have a patch for etcd that disables the "hidden" node feature, which otherwise breaks this use case.
  • skygc Using SkyDNS, especially with the Helios Testing Framework, can leave garbage in the SkyDNS tree within etcd. This tool cleans out the dead entries.
  • docker-maven-plugin Simplifies the building of Docker containers if you're using Maven (and most likely Java).

Findbugs

To run findbugs on the helios codebase, do mvn clean compile site. This will build helios and then run an analysis, emitting reports in helios-*/target/site/findbugs.html.

To silence an irrelevant warning, add a filter match along with a justification in findbugs-exclude.xml.

The Nickel Tour

The sources for the Helios master and agent are under helios-services. The CLI source is under helios-tools. The Helios Java client is under helios-client.

The main meat of the Helios agent is in Supervisor.java, which revolves around the lifecycle of managing individual running Docker containers.

For the master, the HTTP response handlers are in src/main/java/com/spotify/helios/master/resources.

Interactions with ZooKeeper for the agent and master are mainly in ZookeeperAgentModel.java and ZooKeeperMasterModel.java, respectively.

The Helios services use Dropwizard, which is a bundle of Jetty, Jersey, Jackson, Yammer Metrics, Guava, Logback, and other Java libraries.

Community Ideas

These are things we want, but haven't gotten to. If you feel inspired, we'd love to talk to you about these (in no particular order):

  • Host groups
  • ACLs - on jobs, hosts, and deployments
  • Composite jobs -- be able to deploy related containers as a unit on a machine
  • Run once jobs -- for batch jobs
  • Resource specification and enforcement -- That is: restrict my container to X MB of RAM, X CPUs, and X MB disk and perhaps other things like IOPs, network bandwidth, etc.
  • Dynamic scheduling of jobs -- either within Helios itself or as a layer on top
  • Packaging/Config for other Linux distributions such as RedHat, CoreOS, etc.