
moby/swarmkit

A toolkit for orchestrating distributed systems at any scale. It includes primitives for node discovery, raft-based consensus, task scheduling and more.


Top Related Projects

  • docker/docker-ce: :warning: deprecated and will be archived (Docker CE itself is NOT deprecated); see https://github.com/docker/docker-ce/blob/master/README.md
  • kubernetes/kubernetes: Production-Grade Container Scheduling and Management
  • hashicorp/nomad: an easy-to-use, flexible, and performant workload orchestrator that can deploy a mix of microservice, batch, containerized, and non-containerized applications, with native Consul and Vault integrations
  • apache/mesos: Apache Mesos
  • rancher/rancher: Complete container management platform
  • openshift/origin: Conformance test suite for OpenShift

Quick Overview

SwarmKit is an open-source toolkit for orchestrating distributed systems at any scale. It is the foundation for Docker Swarm mode, providing a complete system for managing and coordinating distributed applications across multiple nodes. SwarmKit includes features like service discovery, load balancing, and distributed consensus.

Pros

  • Highly scalable and fault-tolerant architecture
  • Built-in security features, including TLS encryption and node identity management
  • Flexible scheduling and placement strategies for containers
  • Seamless integration with Docker ecosystem

Cons

  • Steeper learning curve compared to simpler orchestration tools
  • Limited support for non-Docker container runtimes
  • Less extensive ecosystem and third-party integrations compared to Kubernetes
  • May be overkill for small-scale deployments

Code Examples

  1. Connecting to a cluster (a cluster is created automatically when the first manager starts, so there is no client-side "create cluster" call; a client instead dials the manager's control socket):
import (
    "context"

    "github.com/docker/swarmkit/api"
    "google.golang.org/grpc"
)

// Dial the control socket that swarmd exposes via --listen-control-api.
conn, err := grpc.Dial("unix:///tmp/node-1/swarm.sock", grpc.WithInsecure())
if err != nil {
    return err
}
client := api.NewControlClient(conn)

// Inspect the cluster created by the first manager.
clusters, err := client.ListClusters(context.Background(), &api.ListClustersRequest{})
  2. Creating a new service:
import (
    "context"

    "github.com/docker/swarmkit/api"
)

// Create a new replicated service running three nginx tasks.
resp, err := client.CreateService(context.Background(), &api.CreateServiceRequest{
    Spec: &api.ServiceSpec{
        Annotations: api.Annotations{
            Name: "my-web-service",
        },
        Task: api.TaskSpec{
            Runtime: &api.TaskSpec_Container{
                Container: &api.ContainerSpec{
                    Image: "nginx:latest",
                },
            },
        },
        Mode: &api.ServiceSpec_Replicated{
            Replicated: &api.ReplicatedService{
                Replicas: 3,
            },
        },
    },
})
// resp.Service holds the created service on success.
  3. Updating an existing service:
import (
    "context"

    "github.com/docker/swarmkit/api"
)

// Update an existing service. The version must match the service's
// current version (see GetService), or the update is rejected.
resp, err := client.UpdateService(context.Background(), &api.UpdateServiceRequest{
    ServiceID:      "service-id",
    ServiceVersion: &api.Version{Index: 1}, // use the service's current version index
    Spec: &api.ServiceSpec{
        Annotations: api.Annotations{
            Name: "updated-service-name",
        },
        Task: api.TaskSpec{
            Runtime: &api.TaskSpec_Container{
                Container: &api.ContainerSpec{
                    Image: "nginx:1.21",
                },
            },
        },
    },
})
// resp.Service holds the updated service on success.

Getting Started

To get started with SwarmKit:

  1. Clone the repository:

    git clone https://github.com/moby/swarmkit.git
    
  2. Build the swarmd and swarmctl binaries:

    cd swarmkit
    make binaries
    
  3. Start a single-node cluster and inspect it:

    ./bin/swarmd -d /tmp/node-1 --listen-control-api /tmp/node-1/swarm.sock --hostname node-1
    SWARM_SOCKET=/tmp/node-1/swarm.sock ./bin/swarmctl cluster inspect default
    

For more detailed instructions and usage examples, refer to the SwarmKit documentation and examples in the repository.
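
If you prefer to drive the cluster from Go rather than swarmctl, the sketch below connects to the manager started in step 3 and lists the cluster's nodes. This is a minimal sketch assuming the ControlClient and ListNodes RPC generated in the swarmkit api package; verify the names against the version you build.

package main

import (
    "context"
    "fmt"
    "log"

    "github.com/docker/swarmkit/api"
    "google.golang.org/grpc"
)

func main() {
    // Dial the control socket passed to swarmd via --listen-control-api.
    conn, err := grpc.Dial("unix:///tmp/node-1/swarm.sock", grpc.WithInsecure())
    if err != nil {
        log.Fatal(err)
    }
    defer conn.Close()

    client := api.NewControlClient(conn)
    resp, err := client.ListNodes(context.Background(), &api.ListNodesRequest{})
    if err != nil {
        log.Fatal(err)
    }
    for _, node := range resp.Nodes {
        fmt.Println(node.ID, node.Status.State)
    }
}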

Competitor Comparisons

docker/docker-ce

:warning: The docker-ce repository is deprecated and will be archived (Docker CE itself is NOT deprecated); see https://github.com/docker/docker-ce/blob/master/README.md :warning:

Pros of docker-ce

  • More comprehensive Docker ecosystem integration
  • Includes Docker Engine, CLI, and additional tools
  • Broader community support and documentation

Cons of docker-ce

  • Larger codebase, potentially more complex to maintain
  • May include features not needed for specific use cases
  • Slower release cycle compared to SwarmKit

Code Comparison

SwarmKit (cluster management):

func (n *Node) run(ctx context.Context) (err error) {
    defer func() {
        if err != nil {
            if err := n.Stop(ctx); err != nil {
                log.G(ctx).WithError(err).Error("failed to shut down node")
            }
        }
    }()
    // ... (additional code)
}

docker-ce (Docker Engine):

func (daemon *Daemon) containerStart(container *container.Container, checkpoint string, checkpointDir string, resetRestartManager bool) (err error) {
    container.Lock()
    defer container.Unlock()
    if container.Running {
        return nil
    }
    // ... (additional code)
}

SwarmKit focuses on cluster management and orchestration, while docker-ce provides a more comprehensive Docker platform. SwarmKit's code emphasizes node management, while docker-ce's code deals with container lifecycle management within the Docker Engine.

kubernetes/kubernetes

Production-Grade Container Scheduling and Management

Pros of Kubernetes

  • More extensive ecosystem with a wider range of tools and integrations
  • Better suited for large-scale, complex deployments across multiple clusters
  • More flexible and customizable with support for various container runtimes

Cons of Kubernetes

  • Steeper learning curve and more complex setup compared to SwarmKit
  • Higher resource overhead, especially for smaller deployments
  • Can be overkill for simple applications or small-scale deployments

Code Comparison

SwarmKit (via a Docker Compose-style stack file):

version: '3'
services:
  web:
    image: nginx
    deploy:
      replicas: 3

Kubernetes:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx

SwarmKit services are commonly defined with a simpler Docker Compose-style stack file, while Kubernetes requires more verbose YAML configurations. Kubernetes offers more granular control over deployments and resources, but at the cost of increased complexity.

Both projects aim to provide container orchestration, but Kubernetes has become the de facto standard for large-scale deployments, while SwarmKit remains a simpler alternative for Docker-native environments.

hashicorp/nomad

Nomad is an easy-to-use, flexible, and performant workload orchestrator that can deploy a mix of microservice, batch, containerized, and non-containerized applications. Nomad is easy to operate and scale and has native Consul and Vault integrations.

Pros of Nomad

  • Supports a wider range of workloads (containers, VMs, and standalone applications)
  • More flexible scheduling with support for batch and service jobs
  • Simpler architecture and easier to set up in multi-region deployments

Cons of Nomad

  • Less tightly integrated with Docker ecosystem
  • Smaller community and ecosystem compared to SwarmKit
  • Lacks some advanced features like built-in secret management

Code Comparison

SwarmKit service deployment (Compose-style stack file):

version: '3'
services:
  web:
    image: nginx
    deploy:
      replicas: 3

Nomad job specification:

job "webserver" {
  datacenters = ["dc1"]
  type = "service"
  group "web" {
    count = 3
    task "nginx" {
      driver = "docker"
      config {
        image = "nginx"
      }
    }
  }
}

Both examples deploy 3 replicas of an Nginx web server, but Nomad's job specification offers more granular control over task placement and resource allocation. SwarmKit's syntax is more concise and Docker-centric, while Nomad's HCL format provides greater flexibility for different workload types.

apache/mesos

Apache Mesos

Pros of Mesos

  • More mature and battle-tested in large-scale production environments
  • Supports a wider range of workloads, including long-running services, batch jobs, and custom frameworks
  • Offers fine-grained resource allocation and isolation

Cons of Mesos

  • Steeper learning curve and more complex setup compared to SwarmKit
  • Less tightly integrated with Docker ecosystem
  • Requires additional components (e.g., Marathon) for container orchestration

Code Comparison

Mesos (resource offer):

{
  "id": "12220-3440-12532-O12",
  "framework_id": "20150916-154057-1243151",
  "slave_id": "20150916-154057-1243152",
  "resources": [
    {"name": "cpus", "type": "SCALAR", "scalar": {"value": 2}},
    {"name": "mem", "type": "SCALAR", "scalar": {"value": 1024}}
  ]
}

SwarmKit (service creation via Docker's Go client):

service, err := client.ServiceCreate(context.Background(), swarm.ServiceSpec{
    TaskTemplate: swarm.TaskSpec{
        ContainerSpec: &swarm.ContainerSpec{
            Image: "nginx:latest",
        },
    },
    Mode: swarm.ServiceMode{Replicated: &swarm.ReplicatedService{Replicas: &replicas}},
}, types.ServiceCreateOptions{})

Both projects aim to solve container orchestration challenges, but SwarmKit focuses on Docker-native clustering, while Mesos provides a more general-purpose resource management platform.

rancher/rancher

Complete container management platform

Pros of Rancher

  • More comprehensive container management platform with a user-friendly web UI
  • Supports multiple orchestration engines (Kubernetes, Docker Swarm, Mesos)
  • Offers built-in monitoring, logging, and security features

Cons of Rancher

  • Higher resource overhead due to its full-featured nature
  • Steeper learning curve for users new to container orchestration
  • May be overkill for simple deployments or small-scale projects

Code Comparison

SwarmKit (Go):

func (n *Node) run(ctx context.Context) (err error) {
    defer func() {
        if err != nil {
            if err := n.Stop(ctx); err != nil {
                log.G(ctx).WithError(err).Error("failed to shut down node")
            }
        }
    }()
    // ... (additional code)
}

Rancher (Go):

func (c *Cluster) Start(ctx context.Context) error {
    c.Lock()
    defer c.Unlock()

    if c.Driver == nil {
        return fmt.Errorf("cluster driver is nil")
    }
    // ... (additional code)
}

Both projects use Go and follow similar patterns for error handling and context management. SwarmKit focuses on node management within a swarm, while Rancher's code reflects its broader scope of cluster management across different orchestration platforms.

openshift/origin

Conformance test suite for OpenShift

Pros of Origin

  • More comprehensive platform-as-a-service (PaaS) solution with built-in CI/CD, monitoring, and logging
  • Enterprise-ready with advanced security features and multi-tenancy support
  • Extensive ecosystem and community support as part of the Red Hat OpenShift product

Cons of Origin

  • Steeper learning curve due to increased complexity
  • Higher resource requirements for deployment and operation
  • Less flexibility for custom container orchestration configurations

Code Comparison

Origin (Kubernetes-based):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: example-app

SwarmKit (Docker Swarm-based):

version: '3'
services:
  example-app:
    image: example-image
    deploy:
      replicas: 3

Origin uses Kubernetes-native YAML for defining deployments, while SwarmKit uses Docker Compose syntax for service definitions. Origin's approach offers more granular control but can be more verbose, whereas SwarmKit's syntax is simpler but may lack some advanced features.


README

SwarmKit


SwarmKit is a toolkit for orchestrating distributed systems at any scale. It includes primitives for node discovery, raft-based consensus, task scheduling and more.

Its main benefits are:

  • Distributed: SwarmKit uses the Raft Consensus Algorithm to coordinate and does not rely on a single point of failure to make decisions.
  • Secure: Node communication and membership within a Swarm are secure out of the box. SwarmKit uses mutual TLS for node authentication, role authorization and transport encryption, automating both certificate issuance and rotation.
  • Simple: SwarmKit is operationally simple and minimizes infrastructure dependencies. It does not need an external database to operate.

Overview

Machines running SwarmKit can be grouped together in order to form a Swarm, coordinating tasks with each other. Once a machine joins, it becomes a Swarm Node. Nodes can either be worker nodes or manager nodes.

  • Worker Nodes are responsible for running Tasks using an Executor. SwarmKit comes with a default Docker Container Executor that can be easily swapped out.
  • Manager Nodes on the other hand accept specifications from the user and are responsible for reconciling the desired state with the actual cluster state.

An operator can dynamically update a Node's role by promoting a Worker to Manager or demoting a Manager to Worker.

Tasks are organized in Services. A service is a higher level abstraction that allows the user to declare the desired state of a group of tasks. Services define what type of task should be created as well as how to execute them (e.g. run this many replicas at all times) and how to update them (e.g. rolling updates).
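
Concretely, that desired state is expressed as a ServiceSpec. The sketch below declares a replicated service with a rolling-update policy; it is a minimal sketch using the swarmkit api package from the examples above, and the UpdateConfig field names (Parallelism, Delay) should be verified against the version you build.

import (
    "time"

    "github.com/docker/swarmkit/api"
)

// Desired state: three nginx replicas, updated two tasks at a time
// with a 10 second pause between batches.
spec := api.ServiceSpec{
    Annotations: api.Annotations{Name: "web"},
    Task: api.TaskSpec{
        Runtime: &api.TaskSpec_Container{
            Container: &api.ContainerSpec{Image: "nginx:latest"},
        },
    },
    Mode: &api.ServiceSpec_Replicated{
        Replicated: &api.ReplicatedService{Replicas: 3},
    },
    Update: &api.UpdateConfig{
        Parallelism: 2,                // update two tasks at a time
        Delay:       10 * time.Second, // wait between update batches
    },
}

The orchestrator's job is then to make the cluster converge on whatever such a spec declares.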

Features

Some of SwarmKit's main features are:

  • Orchestration

    • Desired State Reconciliation: SwarmKit constantly compares the desired state against the current cluster state and reconciles the two if necessary. For instance, if a node fails, SwarmKit reschedules its tasks onto a different node.

    • Service Types: There are different types of services. The project currently ships with two of them out of the box:

      • Replicated Services are scaled to the desired number of replicas.
      • Global Services run one task on every available node in the cluster.
    • Configurable Updates: At any time, you can change the value of one or more fields for a service. After you make the update, SwarmKit reconciles the desired state by ensuring all tasks are using the desired settings. By default, it performs a lockstep update, that is, it updates all tasks at the same time. This can be configured through different knobs:

      • Parallelism defines how many updates can be performed at the same time.
      • Delay sets the minimum delay between updates. SwarmKit will start by shutting down the previous task, bringing up a new one, waiting for it to transition to the RUNNING state, and then waiting for the additional configured delay before moving on to the next tasks.
    • Restart Policies: The orchestration layer monitors tasks and reacts to failures based on the specified policy. The operator can define restart conditions, delays and limits (maximum number of attempts in a given time window). SwarmKit can decide to restart a task on a different machine. This means that faulty nodes will gradually be drained of their tasks.

  • Scheduling

    • Resource Awareness: SwarmKit is aware of resources available on nodes and will place tasks accordingly.

    • Constraints: Operators can limit the set of nodes where a task can be scheduled by defining constraint expressions. Multiple constraints are combined with an AND match, i.e., a node must satisfy every expression. Constraints can match the node attributes listed in the following table. Note that engine.labels are collected from Docker Engine with information like operating system, drivers, etc., while node.labels are added by cluster administrators for operational purposes. For example, some nodes can carry security-compliance labels so that only tasks with compliance requirements run there. (A code sketch using placement constraints follows this feature list.)

      node attribute      | matches                               | example
      --------------------|---------------------------------------|-----------------------------------------------
      node.id             | node's ID                             | node.id == 2ivku8v2gvtg4
      node.hostname       | node's hostname                       | node.hostname != node-2
      node.ip             | node's IP address                     | node.ip != 172.19.17.0/24
      node.role           | node's manager or worker role         | node.role == manager
      node.platform.os    | node's operating system               | node.platform.os == linux
      node.platform.arch  | node's architecture                   | node.platform.arch == x86_64
      node.labels         | node's labels added by cluster admins | node.labels.security == high
      engine.labels       | Docker Engine's labels                | engine.labels.operatingsystem == ubuntu 14.04
    • Strategies: The project currently ships with a spread strategy which will attempt to schedule tasks on the least loaded nodes, provided they meet the constraints and resource requirements.

  • Cluster Management

    • State Store: Manager nodes maintain a strongly consistent, replicated (Raft based) and extremely fast (in-memory reads) view of the cluster which allows them to make quick scheduling decisions while tolerating failures.
    • Topology Management: Node roles (Worker / Manager) can be dynamically changed through API/CLI calls.
    • Node Management: An operator can alter the desired availability of a node: setting it to Paused prevents any further tasks from being scheduled on it, while Drained additionally re-schedules its existing tasks elsewhere (mostly for maintenance scenarios).
  • Security

    • Mutual TLS: All nodes communicate with each other using mutual TLS. Swarm managers act as a Root Certificate Authority, issuing certificates to new nodes.
    • Token-based Join: All nodes require a cryptographic token to join the swarm, which defines that node's role. Tokens can be rotated as often as desired without affecting already-joined nodes.
    • Certificate Rotation: TLS Certificates are rotated and reloaded transparently on every node, allowing a user to set how frequently rotation should happen (the current default is 3 months, the minimum is 30 minutes).
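
As referenced in the Constraints bullet above, placement constraints are carried on the task spec. A minimal sketch, assuming the Placement message from the swarmkit api package:

import "github.com/docker/swarmkit/api"

// Only schedule these tasks on worker nodes that cluster admins have
// labeled security == high; multiple expressions are ANDed together.
task := api.TaskSpec{
    Runtime: &api.TaskSpec_Container{
        Container: &api.ContainerSpec{Image: "nginx:latest"},
    },
    Placement: &api.Placement{
        Constraints: []string{
            "node.role == worker",
            "node.labels.security == high",
        },
    },
}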

Build

Requirements: a working Go development environment.

SwarmKit is built in Go and leverages a standard project structure to work well with Go tooling. If you are new to Go, please see BUILDING.md for a more detailed guide, including the full list of build prerequisites.

Once you have SwarmKit checked out in your $GOPATH, the Makefile can be used for common tasks.

From the project root directory, run the following to build swarmd and swarmctl:

$ make binaries

Test

Before running tests for the first time, setup the tooling:

$ make setup

Then run:

$ make all

Usage Examples

Setting up a Swarm

These instructions assume that swarmd and swarmctl are in your PATH.

(Before starting, make sure the directories /tmp/node-N do not already exist.)

Initialize the first node:

$ swarmd -d /tmp/node-1 --listen-control-api /tmp/node-1/swarm.sock --hostname node-1

Before joining the cluster, fetch the join token:

$ export SWARM_SOCKET=/tmp/node-1/swarm.sock  
$ swarmctl cluster inspect default  
ID          : 87d2ecpg12dfonxp3g562fru1
Name        : default
Orchestration settings:
  Task history entries: 5
Dispatcher settings:
  Dispatcher heartbeat period: 5s
Certificate Authority settings:
  Certificate Validity Duration: 2160h0m0s
  Join Tokens:
    Worker: SWMTKN-1-3vi7ajem0jed8guusgvyl98nfg18ibg4pclify6wzac6ucrhg3-0117z3s2ytr6egmmnlr6gd37n
    Manager: SWMTKN-1-3vi7ajem0jed8guusgvyl98nfg18ibg4pclify6wzac6ucrhg3-d1ohk84br3ph0njyexw0wdagx

In two additional terminals, join two nodes. In the examples below, replace 127.0.0.1:4242 with the address of the first node, and use the <Worker Token> acquired above. In this example, the <Worker Token> is SWMTKN-1-3vi7ajem0jed8guusgvyl98nfg18ibg4pclify6wzac6ucrhg3-0117z3s2ytr6egmmnlr6gd37n. If the joining nodes run on the same host as node-1, select a different remote listening port, e.g., --listen-remote-api 127.0.0.1:4343.

$ swarmd -d /tmp/node-2 --hostname node-2 --join-addr 127.0.0.1:4242 --join-token <Worker Token>
$ swarmd -d /tmp/node-3 --hostname node-3 --join-addr 127.0.0.1:4242 --join-token <Worker Token>

If joining as a manager, also specify --listen-control-api.

$ swarmd -d /tmp/node-4 --hostname node-4 --join-addr 127.0.0.1:4242 --join-token <Manager Token> --listen-control-api /tmp/node-4/swarm.sock --listen-remote-api 127.0.0.1:4245

In a fourth terminal, use swarmctl to explore and control the cluster. Before running swarmctl, set the SWARM_SOCKET environment variable to the path of the manager socket that was specified in --listen-control-api when starting the manager.

To list nodes:

$ export SWARM_SOCKET=/tmp/node-1/swarm.sock
$ swarmctl node ls
ID                         Name    Membership  Status  Availability  Manager Status
--                         ----    ----------  ------  ------------  --------------
3x12fpoi36eujbdkgdnbvbi6r  node-2  ACCEPTED    READY   ACTIVE
4spl3tyipofoa2iwqgabsdcve  node-1  ACCEPTED    READY   ACTIVE        REACHABLE *
dknwk1uqxhnyyujq66ho0h54t  node-3  ACCEPTED    READY   ACTIVE
zw3rwfawdasdewfq66ho34eaw  node-4  ACCEPTED    READY   ACTIVE        REACHABLE


Creating Services

Start a redis service:

$ swarmctl service create --name redis --image redis:3.0.5
08ecg7vc7cbf9k57qs722n2le

List the running services:

$ swarmctl service ls
ID                         Name   Image        Replicas
--                         ----   -----        --------
08ecg7vc7cbf9k57qs722n2le  redis  redis:3.0.5  1/1

Inspect the service:

$ swarmctl service inspect redis
ID                : 08ecg7vc7cbf9k57qs722n2le
Name              : redis
Replicas          : 1/1
Template
 Container
  Image           : redis:3.0.5

Task ID                      Service    Slot    Image          Desired State    Last State                Node
-------                      -------    ----    -----          -------------    ----------                ----
0xk1ir8wr85lbs8sqg0ug03vr    redis      1       redis:3.0.5    RUNNING          RUNNING 1 minutes ago    node-1

Updating Services

You can update any attribute of a service.

For example, you can scale the service by changing the instance count:

$ swarmctl service update redis --replicas 6
08ecg7vc7cbf9k57qs722n2le

$ swarmctl service inspect redis
ID                : 08ecg7vc7cbf9k57qs722n2le
Name              : redis
Replicas          : 6/6
Template
 Container
  Image           : redis:3.0.5

Task ID                      Service    Slot    Image          Desired State    Last State                Node
-------                      -------    ----    -----          -------------    ----------                ----
0xk1ir8wr85lbs8sqg0ug03vr    redis      1       redis:3.0.5    RUNNING          RUNNING 3 minutes ago    node-1
25m48y9fevrnh77til1d09vqq    redis      2       redis:3.0.5    RUNNING          RUNNING 28 seconds ago    node-3
42vwc8z93c884anjgpkiatnx6    redis      3       redis:3.0.5    RUNNING          RUNNING 28 seconds ago    node-2
d41f3wnf9dex3mk6jfqp4tdjw    redis      4       redis:3.0.5    RUNNING          RUNNING 28 seconds ago    node-2
66lefnooz63met6yfrsk6myvg    redis      5       redis:3.0.5    RUNNING          RUNNING 28 seconds ago    node-1
3a2sawtoyk19wqhmtuiq7z9pt    redis      6       redis:3.0.5    RUNNING          RUNNING 28 seconds ago    node-3

Changing replicas from 1 to 6 forced SwarmKit to create 5 additional Tasks in order to comply with the desired state.
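
For comparison, the same scale-up through the control API is a read-modify-write: fetch the service to obtain its current spec and version, change the replica count, then send the update. A hedged sketch built on the GetService and UpdateService RPCs shown earlier (verify field names against your build):

import (
    "context"

    "github.com/docker/swarmkit/api"
)

func scale(ctx context.Context, client api.ControlClient, serviceID string, replicas uint64) error {
    // Read: the current version is needed for optimistic locking.
    get, err := client.GetService(ctx, &api.GetServiceRequest{ServiceID: serviceID})
    if err != nil {
        return err
    }
    svc := get.Service

    // Modify: bump the replica count on the replicated mode.
    spec := svc.Spec
    if m, ok := spec.Mode.(*api.ServiceSpec_Replicated); ok {
        m.Replicated.Replicas = replicas
    }

    // Write: the manager rejects the update if the version is stale.
    _, err = client.UpdateService(ctx, &api.UpdateServiceRequest{
        ServiceID:      svc.ID,
        ServiceVersion: &svc.Meta.Version,
        Spec:           &spec,
    })
    return err
}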

Every other field can be changed as well, such as image, args, env, ...

Let's change the image from redis:3.0.5 to redis:3.0.6 (i.e., an upgrade):

$ swarmctl service update redis --image redis:3.0.6
08ecg7vc7cbf9k57qs722n2le

$ swarmctl service inspect redis
ID                   : 08ecg7vc7cbf9k57qs722n2le
Name                 : redis
Replicas             : 6/6
Update Status
 State               : COMPLETED
 Started             : 3 minutes ago
 Completed           : 1 minute ago
 Message             : update completed
Template
 Container
  Image              : redis:3.0.6

Task ID                      Service    Slot    Image          Desired State    Last State              Node
-------                      -------    ----    -----          -------------    ----------              ----
0udsjss61lmwz52pke5hd107g    redis      1       redis:3.0.6    RUNNING          RUNNING 1 minute ago    node-3
b8o394v840thk10tamfqlwztb    redis      2       redis:3.0.6    RUNNING          RUNNING 1 minute ago    node-1
efw7j66xqpoj3cn3zjkdrwff7    redis      3       redis:3.0.6    RUNNING          RUNNING 1 minute ago    node-3
8ajeipzvxucs3776e4z8gemey    redis      4       redis:3.0.6    RUNNING          RUNNING 1 minute ago    node-2
f05f2lbqzk9fh4kstwpulygvu    redis      5       redis:3.0.6    RUNNING          RUNNING 1 minute ago    node-2
7sbpoy82deq7hu3q9cnucfin6    redis      6       redis:3.0.6    RUNNING          RUNNING 1 minute ago    node-1

By default, all tasks are updated at the same time.

This behavior can be changed by defining update options.

For instance, in order to update tasks 2 at a time and wait at least 10 seconds between updates:

$ swarmctl service update redis --image redis:3.0.7 --update-parallelism 2 --update-delay 10s
$ watch -n1 "swarmctl service inspect redis"  # watch the update

This will update 2 tasks, wait for them to become RUNNING, then wait an additional 10 seconds before moving to other tasks.

Update options can be set at service creation and updated later on. If an update command doesn't specify update options, the last set of options will be used.

Node Management

SwarmKit monitors node health. In the case of node failures, it re-schedules tasks to other nodes.

An operator can manually define the Availability of a node and can Pause and Drain nodes.

Let's put node-1 into maintenance mode:

$ swarmctl node drain node-1

$ swarmctl node ls
ID                         Name    Membership  Status  Availability  Manager Status
--                         ----    ----------  ------  ------------  --------------
3x12fpoi36eujbdkgdnbvbi6r  node-2  ACCEPTED    READY   ACTIVE
4spl3tyipofoa2iwqgabsdcve  node-1  ACCEPTED    READY   DRAIN         REACHABLE *
dknwk1uqxhnyyujq66ho0h54t  node-3  ACCEPTED    READY   ACTIVE

$ swarmctl service inspect redis
ID                   : 08ecg7vc7cbf9k57qs722n2le
Name                 : redis
Replicas             : 6/6
Update Status
 State               : COMPLETED
 Started             : 2 minutes ago
 Completed           : 1 minute ago
 Message             : update completed
Template
 Container
  Image              : redis:3.0.7

Task ID                      Service    Slot    Image          Desired State    Last State                Node
-------                      -------    ----    -----          -------------    ----------                ----
8uy2fy8dqbwmlvw5iya802tj0    redis      1       redis:3.0.7    RUNNING          RUNNING 23 seconds ago    node-2
7h9lgvidypcr7q1k3lfgohb42    redis      2       redis:3.0.7    RUNNING          RUNNING 2 minutes ago     node-3
ae4dl0chk3gtwm1100t5yeged    redis      3       redis:3.0.7    RUNNING          RUNNING 23 seconds ago    node-3
9fz7fxbg0igypstwliyameobs    redis      4       redis:3.0.7    RUNNING          RUNNING 2 minutes ago     node-3
drzndxnjz3c8iujdewzaplgr6    redis      5       redis:3.0.7    RUNNING          RUNNING 23 seconds ago    node-2
7rcgciqhs4239quraw7evttyf    redis      6       redis:3.0.7    RUNNING          RUNNING 2 minutes ago     node-2

As you can see, every Task running on node-1 was rebalanced to either node-2 or node-3 by the reconciliation loop.
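
The same drain can be performed through the control API by flipping the node's desired availability. A hedged sketch; the UpdateNode RPC and the NodeAvailabilityDrain constant follow the swarmkit api package's usual generated names, so verify them against your build:

import (
    "context"

    "github.com/docker/swarmkit/api"
)

func drainNode(ctx context.Context, client api.ControlClient, nodeID string) error {
    // Fetch the node for its current spec and version.
    get, err := client.GetNode(ctx, &api.GetNodeRequest{NodeID: nodeID})
    if err != nil {
        return err
    }
    node := get.Node

    // Setting availability to DRAIN makes the orchestrator re-schedule
    // the node's tasks elsewhere, as in the swarmctl output above.
    spec := node.Spec
    spec.Availability = api.NodeAvailabilityDrain

    _, err = client.UpdateNode(ctx, &api.UpdateNodeRequest{
        NodeID:      node.ID,
        NodeVersion: &node.Meta.Version,
        Spec:        &spec,
    })
    return err
}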