
hashicorp/nomad

Nomad is an easy-to-use, flexible, and performant workload orchestrator that can deploy a mix of microservice, batch, containerized, and non-containerized applications. Nomad is easy to operate and scale and has native Consul and Vault integrations.


Top Related Projects

  • Consul (28,222 stars): a distributed, highly available, and data-center-aware solution to connect and configure applications across dynamic, distributed infrastructure.

  • Kubernetes (109,710 stars): production-grade container scheduling and management.

  • Apache Mesos (5,233 stars)

  • Classic Swarm: a container clustering system, not to be confused with Docker Swarm, which lives at https://github.com/docker/swarmkit.

  • Origin (8,467 stars): conformance test suite for OpenShift.

  • Rancher (23,209 stars): a complete container management platform.

Quick Overview

Nomad is an open-source cluster management and scheduling tool developed by HashiCorp. It provides a flexible and scalable solution for deploying and managing applications across diverse infrastructure environments, supporting both containerized and non-containerized workloads.

Pros

  • Multi-platform support: Works with containers, VMs, and standalone applications
  • Scalable and highly available: Can manage clusters of thousands of nodes
  • Easy to use and deploy: Simple setup process and intuitive CLI
  • Integrates well with other HashiCorp tools like Consul and Vault

Cons

  • Less mature ecosystem compared to Kubernetes
  • Limited built-in monitoring and observability features
  • Steeper learning curve for advanced features and custom plugins
  • Smaller community and fewer third-party integrations compared to some alternatives

Getting Started

To get started with Nomad, follow these steps:

  1. Download and install Nomad from the official website:

    https://www.nomadproject.io/downloads
    
  2. Start a development Nomad agent:

    nomad agent -dev
    
  3. Create a simple job file (e.g., example.nomad):

    job "example" {
      datacenters = ["dc1"]
      type = "service"

      group "cache" {
        count = 1

        # Defines the "db" port label referenced by the task below.
        network {
          port "db" {
            to = 6379
          }
        }

        task "redis" {
          driver = "docker"
          config {
            image = "redis:7"
            ports = ["db"]
          }
        }
      }
    }
    
  4. Run the job:

    nomad job run example.nomad
    
  5. Check the status of your job:

    nomad job status example
    

For more detailed information and advanced usage, refer to the official Nomad documentation.
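The dev-mode example above omits resource limits; in practice each task usually also declares a resources block. A minimal sketch of the same task with limits added (the values are illustrative, not required defaults):

```hcl
task "redis" {
  driver = "docker"

  config {
    image = "redis:7"
    ports = ["db"]
  }

  # Illustrative limits: ~0.5 GHz of CPU and 256 MB of RAM.
  resources {
    cpu    = 500 # MHz
    memory = 256 # MB
  }
}
```

Without a resources block, Nomad falls back to modest defaults, so declaring limits explicitly is a common first refinement.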

Competitor Comparisons

Consul (28,222 stars)

Pros of Consul

  • More focused on service discovery and configuration management
  • Better suited for microservices architectures
  • Provides a distributed key-value store for configuration data

Cons of Consul

  • Less comprehensive job scheduling capabilities
  • Not designed for managing long-running batch jobs
  • Limited support for complex workload orchestration

Code Comparison

Consul configuration example:

service {
  name = "web"
  port = 80
  check {
    http = "http://localhost/health"
    interval = "10s"
  }
}

Nomad job specification example:

job "webserver" {
  datacenters = ["dc1"]
  type = "service"
  group "webserver" {
    task "nginx" {
      driver = "docker"
      config {
        image = "nginx:latest"
      }
    }
  }
}

Both Consul and Nomad are HashiCorp products designed for different aspects of distributed systems management. Consul excels in service discovery and configuration management, making it ideal for microservices architectures. It provides a distributed key-value store and robust health checking capabilities. Nomad, on the other hand, focuses on workload orchestration and job scheduling across diverse environments. While Consul is better suited for service-oriented tasks, Nomad offers more comprehensive job management features, including support for long-running batch jobs and complex workload orchestration.
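In practice the two tools are often combined: a Nomad task can register itself in Consul through a service block in the job file. A minimal sketch of that integration (the service name, port label, and health-check values are illustrative):

```hcl
job "web" {
  datacenters = ["dc1"]

  group "web" {
    network {
      port "http" {
        to = 80
      }
    }

    task "nginx" {
      driver = "docker"

      config {
        image = "nginx:latest"
        ports = ["http"]
      }

      # Registers this task in Consul, mirroring the standalone
      # Consul service definition shown above.
      service {
        name = "web"
        port = "http"

        check {
          type     = "http"
          path     = "/health"
          interval = "10s"
          timeout  = "2s"
        }
      }
    }
  }
}
```

With this block in place, Consul handles service discovery and health checking while Nomad handles placement and lifecycle.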

Kubernetes (109,710 stars)

Pros of Kubernetes

  • More extensive ecosystem with a wider range of tools and integrations
  • Better suited for large-scale, complex deployments across multiple clusters
  • Stronger community support and more frequent updates

Cons of Kubernetes

  • Steeper learning curve and more complex setup
  • Higher resource overhead, especially for smaller deployments
  • Can be overkill for simpler applications or smaller teams

Code Comparison

Nomad job specification:

job "webserver" {
  datacenters = ["dc1"]
  type = "service"
  group "webserver" {
    count = 3
    task "nginx" {
      driver = "docker"
      config {
        image = "nginx:latest"
      }
    }
  }
}

Kubernetes deployment:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest

Both examples deploy three instances of an Nginx web server, but Kubernetes requires more configuration and uses YAML rather than HCL. Nomad's job specification is more concise and easier to read for simpler deployments, while Kubernetes offers finer-grained control and extensibility for complex scenarios.
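Scaling works similarly in both systems: in Kubernetes you would edit the replicas field (or use kubectl scale), while in Nomad you can change count or, in recent versions, attach a scaling block to the group so the count can be adjusted without editing the file. A sketch, with illustrative bounds:

```hcl
group "webserver" {
  count = 3

  # Lets "nomad job scale" (or the Nomad Autoscaler) adjust the
  # count within these illustrative bounds.
  scaling {
    enabled = true
    min     = 1
    max     = 10
  }

  task "nginx" {
    driver = "docker"
    config {
      image = "nginx:latest"
    }
  }
}
```

With this in place, a command such as "nomad job scale webserver webserver 5" can resize the group without a full job edit.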

Apache Mesos (5,233 stars)

Pros of Mesos

  • More mature and battle-tested in large-scale production environments
  • Supports a wider range of workloads, including long-running services, batch jobs, and real-time analytics
  • Offers fine-grained resource allocation and isolation

Cons of Mesos

  • Steeper learning curve and more complex setup compared to Nomad
  • Requires additional frameworks (e.g., Marathon) for container orchestration
  • Less active development and community support in recent years

Code Comparison

Mesos task definition (via the Marathon framework):

{
  "id": "my-task",
  "cmd": "echo hello",
  "mem": 128,
  "cpus": 0.1
}

Nomad job specification:

job "my-job" {
  task "my-task" {
    driver = "raw_exec"
    config {
      command = "echo"
      args    = ["hello"]
    }
    resources {
      cpu    = 100
      memory = 128
    }
  }
}

Both Nomad and Mesos are distributed cluster management systems, but they differ in their approach and complexity. Nomad focuses on simplicity and ease of use, while Mesos provides a more flexible and powerful framework for resource management. Nomad's job specification is more declarative and uses HCL, while Mesos typically requires JSON for task definitions. Nomad has gained popularity in recent years due to its simpler architecture and integration with HashiCorp's ecosystem, while Mesos has seen a decline in adoption despite its capabilities in handling diverse workloads at scale.
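Nomad's batch support mentioned above is expressed through the job type; a periodic batch job takes only a few lines. A sketch (the schedule and command path are illustrative, not taken from the projects above):

```hcl
job "nightly-report" {
  datacenters = ["dc1"]
  type        = "batch"

  # Runs the job on a cron-style schedule.
  periodic {
    cron             = "0 2 * * *" # 02:00 daily (illustrative)
    prohibit_overlap = true
  }

  group "report" {
    task "generate" {
      driver = "raw_exec"

      config {
        command = "/usr/local/bin/generate-report" # hypothetical binary
      }
    }
  }
}
```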

Classic Swarm

Swarm Classic is a container clustering system, not to be confused with Docker Swarm, which lives at https://github.com/docker/swarmkit.

Pros of Classic Swarm

  • Tightly integrated with Docker ecosystem, making it easier for Docker users
  • Simpler setup and configuration process
  • Native support for Docker Compose files

Cons of Classic Swarm

  • Limited scalability compared to Nomad
  • Fewer advanced features and less flexibility in job scheduling
  • Discontinued project with no active development

Code Comparison

Nomad job specification:

job "web" {
  datacenters = ["dc1"]
  type = "service"
  group "frontend" {
    count = 3
    task "nginx" {
      driver = "docker"
      config {
        image = "nginx:latest"
      }
    }
  }
}

Classic Swarm deployment via a Docker Compose file:

version: '3'
services:
  web:
    image: nginx:latest
    deploy:
      replicas: 3

While both systems allow for container orchestration, Nomad's job specification offers more granular control and flexibility. Classic Swarm's syntax is simpler but less powerful.

Nomad supports a wider range of workloads beyond containers, including virtual machines and executable files. It also provides more advanced scheduling capabilities and integrates well with other HashiCorp tools.

Classic Swarm, being Docker-native, offers a more straightforward experience for Docker users but lacks the advanced features and active development of Nomad. Its simplicity can be an advantage for smaller deployments or Docker-centric environments.
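The non-container workloads mentioned above are handled through Nomad's other task drivers; for example, a plain executable can be run with the exec driver (Linux only). A sketch, where the binary path and arguments are illustrative:

```hcl
job "standalone-app" {
  datacenters = ["dc1"]
  type        = "service"

  group "app" {
    task "server" {
      # Runs an ordinary executable in an isolated environment;
      # no container image is involved.
      driver = "exec"

      config {
        command = "/opt/myapp/bin/server" # hypothetical binary
        args    = ["-port", "8080"]
      }
    }
  }
}
```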

Origin (8,467 stars)

Pros of Origin

  • More comprehensive platform-as-a-service (PaaS) solution, offering a complete container application platform
  • Built on Kubernetes, providing a robust and widely-adopted container orchestration foundation
  • Extensive enterprise-grade features, including integrated CI/CD, monitoring, and security capabilities

Cons of Origin

  • Steeper learning curve due to its complexity and extensive feature set
  • Higher resource requirements for deployment and operation
  • Less flexibility for custom scheduling and resource allocation compared to Nomad's simplicity

Code Comparison

Origin (OpenShift):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
      - name: example-app
        image: example/app:latest

Nomad:

job "example-app" {
  datacenters = ["dc1"]
  type = "service"

  group "app" {
    count = 3
    task "server" {
      driver = "docker"
      config {
        image = "example/app:latest"
      }
    }
  }
}

The code snippets demonstrate the different approaches to deploying applications. Origin uses Kubernetes-style YAML manifests, while Nomad employs its own HCL-based job specification format. Origin's deployment model is Kubernetes-native, whereas Nomad's job specification offers more flexibility in defining tasks and resource allocation.

Rancher (23,209 stars)

Pros of Rancher

  • Provides a comprehensive GUI for managing multiple Kubernetes clusters
  • Supports multi-cloud and hybrid cloud deployments out of the box
  • Offers built-in monitoring, logging, and security features

Cons of Rancher

  • More complex setup and maintenance compared to Nomad
  • Primarily focused on Kubernetes, limiting flexibility for non-containerized workloads
  • Steeper learning curve for users new to container orchestration

Code Comparison

Rancher (YAML configuration):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest

Nomad (HCL job specification):

job "nginx" {
  datacenters = ["dc1"]
  type = "service"
  group "nginx" {
    count = 3
    task "nginx" {
      driver = "docker"
      config {
        image = "nginx:latest"
      }
    }
  }
}

Both Rancher and Nomad provide ways to deploy and manage containerized applications, but they use different syntaxes and approaches. Rancher uses Kubernetes-native YAML configurations, while Nomad employs its own HCL-based job specifications, offering more flexibility for various workload types.


README

Nomad · License: BUSL-1.1 · Discuss


Nomad is a simple and flexible workload orchestrator that deploys and manages containers (Docker, Podman), non-containerized applications (executables, Java), and virtual machines (QEMU) across on-premises and cloud environments at scale.

Nomad is supported on Linux, Windows, and macOS. A commercial version of Nomad, Nomad Enterprise, is also available.

Nomad provides several key features:

  • Deploy Containers and Legacy Applications: Nomad’s flexibility as an orchestrator enables an organization to run containers, legacy, and batch applications together on the same infrastructure. Through pluggable task drivers, Nomad brings core orchestration benefits to legacy applications without requiring them to be containerized.

  • Simple & Reliable: Nomad runs as a single binary and is entirely self-contained, combining resource management and scheduling into a single system. Nomad does not require any external services for storage or coordination, and it automatically handles application, node, and driver failures. Nomad is distributed and resilient, using leader election and state replication to provide high availability in the event of failures.

  • Device Plugins & GPU Support: Nomad offers built-in support for GPU workloads such as machine learning (ML) and artificial intelligence (AI). Nomad uses device plugins to automatically detect and utilize resources from hardware devices such as GPUs, FPGAs, and TPUs.

  • Federation for Multi-Region, Multi-Cloud: Nomad was designed to support infrastructure at a global scale. Nomad supports federation out-of-the-box and can deploy applications across multiple regions and clouds.

  • Proven Scalability: Nomad is optimistically concurrent, which increases throughput and reduces latency for workloads. Nomad has been proven to scale to clusters of 10K+ nodes in real-world production environments.

  • HashiCorp Ecosystem: Nomad integrates seamlessly with Terraform, Consul, and Vault for provisioning, service discovery, and secrets management.
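The Vault integration is used from a job file through the vault and template blocks, which request a token and render secrets into the task. A minimal sketch (the policy name, secret path, and image are illustrative):

```hcl
job "api" {
  datacenters = ["dc1"]

  group "api" {
    task "server" {
      driver = "docker"

      config {
        image = "example/api:1.0" # hypothetical image
      }

      # Requests a Vault token scoped to the given (illustrative) policy.
      vault {
        policies = ["api-read"]
      }

      # Renders a Vault secret into the task's environment.
      template {
        data        = <<EOT
{{ with secret "secret/data/api" }}
DB_PASSWORD={{ .Data.data.password }}
{{ end }}
EOT
        destination = "secrets/db.env"
        env         = true
      }
    }
  }
}
```

Nomad handles token renewal for the task's lifetime, so the application never manages Vault credentials directly.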

Quick Start

Testing

See Developer: Getting Started for instructions on setting up a local Nomad cluster for non-production use.

Optionally, find Terraform manifests for bringing up a development Nomad cluster on a public cloud in the terraform directory.

Production

See Developer: Nomad Reference Architecture for recommended practices and a reference architecture for production deployments.

Documentation

Full, comprehensive documentation is available on the Nomad website: https://developer.hashicorp.com/nomad/docs

Guides are available on HashiCorp Developer.

Roadmap

A timeline of major features expected for the next release or two can be found in the Public Roadmap.

This roadmap is a best guess at any given point, and both release dates and projects in each release are subject to change. Do not take any of these items as commitments, especially ones later than one major release away.

Contributing

See the contributing directory for more developer documentation.