Top Related Projects
Nomad - An easy-to-use, flexible, and performant workload orchestrator that can deploy a mix of microservice, batch, containerized, and non-containerized applications. Nomad is easy to operate and scale and has native Consul and Vault integrations.
Apache Airflow - A platform to programmatically author, schedule, and monitor workflows.
Luigi - A Python module that helps you build complex pipelines of batch jobs. It handles dependency resolution, workflow management, visualization, etc. It also comes with Hadoop support built in.
Dagster - An orchestration platform for the development, production, and observation of data assets.
Rundeck - Enable self-service operations: give specific users access to your existing tools, services, and scripts.
Quick Overview
Dkron is a distributed, fault-tolerant job scheduling system written in Go. It allows users to schedule and run jobs across multiple machines, providing high availability and scalability for task execution in distributed environments.
Pros
- Distributed architecture for improved reliability and scalability
- Easy to set up and use with a web UI and REST API
- Supports various execution methods (shell, HTTP, and more)
- Integrates well with existing infrastructure and tools
Cons
- Limited built-in job types compared to some other schedulers
- Documentation could be more comprehensive for advanced use cases
- Requires careful configuration for optimal performance in large-scale deployments
- Relatively smaller community compared to some more established job schedulers
Code Examples
- Creating a job using the Dkron API:
job := &dkron.Job{
    Name:     "my-job",
    Schedule: "@every 1m",
    Executor: "shell",
    ExecutorConfig: map[string]string{
        "command": "echo 'Hello, Dkron!'",
    },
}
client.CreateJob(job)
- Retrieving job execution status:
execution, err := client.GetJobExecution("my-job", executionID)
if err != nil {
    log.Fatal(err)
}
fmt.Printf("Job status: %s\n", execution.Status)
- Triggering a job manually:
err := client.RunJob("my-job")
if err != nil {
    log.Fatal(err)
}
fmt.Println("Job triggered successfully")
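- Defining a job that uses the HTTP executor instead of the shell executor (a rough sketch; the executor_config keys shown here, such as method, url, and expectCode, are assumptions to verify against the executor docs for your Dkron version):
// Sketch of a job that calls a webhook via Dkron's HTTP executor.
// The executor_config keys below are assumptions; check your version's docs.
job := &dkron.Job{
    Name:     "ping-webhook",
    Schedule: "@every 5m",
    Executor: "http",
    ExecutorConfig: map[string]string{
        "method":     "GET",
        "url":        "https://example.com/health",
        "expectCode": "200",
    },
}
client.CreateJob(job)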
Getting Started
To get started with Dkron, follow these steps:
- Install Dkron:
curl -L -O https://github.com/distribworks/dkron/releases/download/v3.2.1/dkron_3.2.1_linux_amd64.tar.gz
tar -xzf dkron_3.2.1_linux_amd64.tar.gz
sudo mv dkron /usr/local/bin/
- Start a Dkron server:
dkron agent --server --bootstrap-expect 1 --node-name node1
- Access the web UI at http://localhost:8080 to manage jobs and view the dashboard.
- Use the REST API or Dkron client library to interact with the scheduler programmatically (see the sketch below).
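As a minimal sketch of the REST approach, the following Go program posts a job definition to the agent's HTTP API. It assumes the default API address of localhost:8080 and a /v1/jobs endpoint; confirm both against the API docs for the version you run.
package main

import (
    "bytes"
    "fmt"
    "log"
    "net/http"
)

func main() {
    // Job definition as JSON, using the executor/executor_config format
    // shown in the examples above.
    job := []byte(`{
        "name": "my-job",
        "schedule": "@every 1m",
        "executor": "shell",
        "executor_config": {"command": "echo 'Hello, Dkron!'"}
    }`)

    // POST the job to the Dkron HTTP API (assumed endpoint: /v1/jobs).
    resp, err := http.Post("http://localhost:8080/v1/jobs", "application/json", bytes.NewReader(job))
    if err != nil {
        log.Fatal(err)
    }
    defer resp.Body.Close()

    fmt.Println("Response status:", resp.Status)
}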
Competitor Comparisons
Nomad is an easy-to-use, flexible, and performant workload orchestrator that can deploy a mix of microservice, batch, containerized, and non-containerized applications. Nomad is easy to operate and scale and has native Consul and Vault integrations.
Pros of Nomad
- More comprehensive orchestration platform, supporting various workload types beyond just scheduled jobs
- Larger ecosystem and community support, being a HashiCorp product
- Advanced features like service discovery, multi-region federation, and native integration with other HashiCorp tools
Cons of Nomad
- Higher complexity and steeper learning curve compared to Dkron's focused job scheduling
- Requires more resources to run and manage due to its broader feature set
- May be overkill for simple job scheduling needs, where Dkron's lightweight approach could suffice
Code Comparison
Dkron job definition:
{
  "name": "job_name",
  "schedule": "0 */5 * * * *",
  "executor": "shell",
  "executor_config": {
    "command": "echo 'Hello World'"
  }
}
Nomad job specification:
job "job_name" {
periodic {
cron = "*/5 * * * *"
}
task "task_name" {
driver = "raw_exec"
config {
command = "echo"
args = ["Hello World"]
}
}
}
Both Dkron and Nomad offer job scheduling capabilities, but Nomad provides a more extensive set of features for orchestrating various workloads. Dkron focuses specifically on distributed job scheduling, making it simpler to use for basic scheduling needs. The code comparison shows that Dkron uses a JSON format for job definitions, while Nomad employs HCL (HashiCorp Configuration Language) for more detailed job specifications.
Apache Airflow - A platform to programmatically author, schedule, and monitor workflows
Pros of Airflow
- More comprehensive workflow management system with advanced features like DAGs, operators, and sensors
- Larger community and ecosystem with extensive plugins and integrations
- Better suited for complex data pipelines and ETL processes
Cons of Airflow
- Steeper learning curve and more complex setup compared to Dkron
- Heavier resource requirements, potentially overkill for simpler scheduling needs
- Less focus on distributed execution across multiple nodes
Code Comparison
Airflow DAG example:
from airflow import DAG
from airflow.operators.python_operator import PythonOperator
from datetime import datetime
def print_hello():
    print("Hello, Airflow!")

dag = DAG('hello_world', description='Simple tutorial DAG',
          schedule_interval='0 12 * * *',
          start_date=datetime(2017, 3, 20), catchup=False)

hello_operator = PythonOperator(task_id='hello_task', python_callable=print_hello, dag=dag)
Dkron job configuration:
{
  "name": "hello_world",
  "schedule": "0 12 * * *",
  "timezone": "UTC",
  "owner": "example@example.com",
  "executor": "shell",
  "executor_config": {
    "command": "echo 'Hello, Dkron!'"
  }
}
Luigi is a Python module that helps you build complex pipelines of batch jobs. It handles dependency resolution, workflow management, visualization etc. It also comes with Hadoop support built in.
Pros of Luigi
- More extensive and feature-rich workflow management system
- Supports a wide range of task types and data processing operations
- Large and active community with extensive documentation and examples
Cons of Luigi
- Can be complex to set up and configure for simpler use cases
- Requires Python knowledge and may have a steeper learning curve
- Heavier resource usage compared to lightweight alternatives
Code Comparison
Luigi example:
import luigi

class MyTask(luigi.Task):
    def requires(self):
        return SomeOtherTask()

    def run(self):
        # Task logic here
        pass
Dkron example:
{
  "name": "my-task",
  "schedule": "@every 1h",
  "executor": "shell",
  "executor_config": {
    "command": "echo 'Hello World'"
  }
}
Luigi focuses on defining tasks as Python classes with dependencies and complex workflows, while Dkron uses a simpler JSON configuration for scheduling and executing tasks. Luigi is better suited for data processing pipelines, while Dkron excels in distributed job scheduling with a more straightforward setup.
An orchestration platform for the development, production, and observation of data assets.
Pros of Dagster
- More comprehensive data orchestration platform with built-in data lineage and asset management
- Stronger support for Python-based workflows and data science use cases
- Larger and more active community, with frequent updates and extensive documentation
Cons of Dagster
- Steeper learning curve due to its more complex architecture and concepts
- Potentially overkill for simple task scheduling needs
- Requires more setup and configuration compared to lightweight alternatives
Code Comparison
Dkron job definition:
{
  "name": "job1",
  "schedule": "@every 1m",
  "executor": "shell",
  "executor_config": {
    "command": "echo 'Hello World'"
  }
}
Dagster job definition:
from dagster import RunRequest, job, op, schedule

@op
def say_hello():
    print("Hello World")

@job
def my_job():
    say_hello()

@schedule(cron_schedule="* * * * *", job=my_job)
def my_schedule(context):
    return RunRequest(run_key=None, run_config={})
Both Dkron and Dagster offer job scheduling capabilities, but Dagster provides a more expressive and flexible Python-based approach for defining complex workflows and data pipelines. Dkron's JSON-based job definitions are simpler and more straightforward for basic task scheduling needs.
Enable Self-Service Operations: Give specific users access to your existing tools, services, and scripts
Pros of Rundeck
- More comprehensive feature set, including built-in GUI and workflow management
- Larger community and ecosystem, with extensive plugins and integrations
- Better suited for complex, enterprise-level job scheduling and automation
Cons of Rundeck
- Steeper learning curve and more complex setup compared to Dkron
- Heavier resource requirements, potentially overkill for simpler use cases
- Less focus on distributed systems and high availability out of the box
Code Comparison
Rundeck job definition (YAML):
- name: Hello World
  nodeStep: true
  description: A simple job
  executionEnabled: true
  sequence:
    commands:
      - exec: echo "Hello, World!"
Dkron job definition (JSON):
{
  "name": "hello-world",
  "schedule": "@every 1m",
  "executor": "shell",
  "executor_config": {
    "command": "echo 'Hello, World!'"
  }
}
Both Rundeck and Dkron offer job scheduling capabilities, but Rundeck provides a more extensive YAML-based job definition with additional options, while Dkron uses a simpler JSON format focused on essential job properties.
README
Dkron - Distributed, fault tolerant job scheduling system for cloud native environments

Website: http://dkron.io/
Dkron is a distributed cron service, easy to set up and fault tolerant, with a focus on:
- Easy: Easy to use with a great UI
- Reliable: Completely fault tolerant
- Highly scalable: Able to handle high volumes of scheduled jobs and thousands of nodes
Dkron is written in Go and leverages the Raft protocol and Serf to provide fault tolerance, reliability, and scalability while remaining simple and easy to install.
Dkron is inspired by the Google whitepaper Reliable Cron across the Planet and by Airbnb's Chronos, borrowing features from both.
Dkron runs on Linux, macOS, and Windows. It can be used to run scheduled commands on a server cluster using any combination of servers for each job. It has no single point of failure thanks to its use of the Gossip protocol and fault-tolerant distributed databases.
You can use Dkron to run the most important part of your company: scheduled jobs.
Installation
Full, comprehensive documentation is viewable on the Dkron website
Development Quick start
The best way to test and develop Dkron is using Docker; you will need Docker installed before proceeding.
Clone the repository.
Next, run the included Docker Compose config:
docker-compose up
This will start the Dkron instances. To add more Dkron instances to the cluster:
docker-compose up --scale dkron-server=4
docker-compose up --scale dkron-agent=10
Check the port mapping using docker-compose ps and use your browser to navigate to the Dkron dashboard on one of the ports mapped by Compose.
To add jobs to the system read the API docs.
Frontend development
The Dkron dashboard is built using React Admin as a single-page application.
To start developing the dashboard, enter the ui directory and run npm install to get the frontend dependencies, then start the local server with npm start; it should start a new local web server and open a new browser window serving the web UI.
Make your changes to the code, then run make ui to generate the asset files; this is how the dashboard resources are embedded in the Go application.
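As a rough illustration of resource embedding in Go (the actual make ui target may use a different mechanism, and the ui/build directory name below is only an assumption), the standard library's embed package can serve a built dashboard directory from inside the binary:
package main

import (
    "embed"
    "io/fs"
    "log"
    "net/http"
)

// Embed the compiled dashboard assets at build time.
// The directory name is illustrative; Dkron's actual layout may differ.
//go:embed ui/build
var assets embed.FS

func main() {
    // Strip the ui/build prefix so files are served from the web root.
    sub, err := fs.Sub(assets, "ui/build")
    if err != nil {
        log.Fatal(err)
    }
    http.Handle("/", http.FileServer(http.FS(sub)))
    log.Fatal(http.ListenAndServe(":8080", nil))
}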
Resources
Chef cookbook https://supermarket.chef.io/cookbooks/dkron
Python Client Library https://github.com/oldmantaiter/pydkron
Ruby client https://github.com/jobandtalent/dkron-rb
PHP client https://github.com/gromo/dkron-php-adapter
Terraform provider https://github.com/bozerkins/terraform-provider-dkron
Manage and run jobs in Dkron from your django project https://github.com/surface-security/django-dkron
Contributors
Made with contrib.rocks.
Get in touch
- Twitter: @distribworks
- Chat: https://gitter.im/distribworks/dkron
- Email: victor at distrib.works