Top Related Projects
An open-source runtime for composable workflows. Great for AI agents and CI/CD.
Workflow Engine for Kubernetes
Run your GitHub Actions locally 🚀
Harness Open Source is an end-to-end developer platform with Source Control Management, CI/CD Pipelines, Hosted Developer Environments, and Artifact Registries.
Concourse is a container-based automation system written in Go.
Quick Overview
Badger is an open-source project management tool designed for software development teams. It offers a streamlined interface for task tracking, sprint planning, and team collaboration, with a focus on simplicity and ease of use.
Pros
- Intuitive user interface, reducing the learning curve for new team members
- Customizable workflows to fit various development methodologies
- Seamless integration with popular version control systems like Git
- Real-time collaboration features for improved team communication
Cons
- Limited advanced reporting and analytics capabilities compared to some enterprise-level alternatives
- Fewer third-party integrations available than more established project management tools
- May lack some specialized features required for non-software development projects
- Documentation could be more comprehensive for advanced use cases
Code Examples
As Badger is a project management tool and not a code library, there are no code examples to provide.
Getting Started
As Badger is a project management tool and not a code library, there are no specific code-based getting started instructions. However, users can typically get started by:
- Signing up for an account on the Badger website
- Creating a new project
- Inviting team members
- Setting up project boards and workflows
- Adding tasks and assigning them to team members
For detailed instructions, users should refer to the official documentation on the Badger website or GitHub repository.
Competitor Comparisons
An open-source runtime for composable workflows. Great for AI agents and CI/CD.
Pros of Dagger
- More mature project with a larger community and ecosystem
- Supports multiple programming languages (Go, Python, TypeScript)
- Provides a unified CI/CD experience across different platforms
Cons of Dagger
- Steeper learning curve due to its more complex architecture
- Requires Docker to be installed and running on the host system
- May be overkill for smaller projects or simpler CI/CD needs
Code Comparison
Badger (JavaScript):
```javascript
import { Badger } from '@hypermode/badger';

const badger = new Badger();
await badger.run('npm install');
await badger.run('npm test');
```
Dagger (Go):
```go
package main

import (
	"context"
	"fmt"

	"dagger.io/dagger"
)

func main() {
	ctx := context.Background()
	client, err := dagger.Connect(ctx)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	out, err := client.Container().
		From("node:16").
		WithExec([]string{"npm", "install"}).
		WithExec([]string{"npm", "test"}).
		Stdout(ctx)
	if err != nil {
		panic(err)
	}
	fmt.Println(out)
}
```
Both Badger and Dagger aim to simplify CI/CD processes, but they take different approaches. Badger focuses on simplicity and ease of use, particularly for JavaScript projects, while Dagger offers a more comprehensive and flexible solution for complex CI/CD pipelines across multiple languages and platforms.
Workflow Engine for Kubernetes
Pros of Argo Workflows
- More mature and widely adopted project with a larger community
- Supports complex workflow orchestration with DAGs and advanced features
- Integrates well with Kubernetes and cloud-native ecosystems
Cons of Argo Workflows
- Steeper learning curve due to its complexity and feature-rich nature
- Requires Kubernetes infrastructure, which may be overkill for simpler use cases
Code Comparison
Argo Workflows (YAML):
```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  name: hello-world
spec:
  entrypoint: whalesay
  templates:
    - name: whalesay
      container:
        image: docker/whalesay:latest
        command: [cowsay]
        args: ["Hello World"]
```
Badger (Python):
```python
from badger import Workflow

workflow = Workflow("hello-world")

@workflow.task()
def say_hello():
    print("Hello World")

workflow.run()
```
The code comparison shows that Argo Workflows uses YAML for defining workflows, while Badger uses Python decorators. Argo Workflows is more verbose and Kubernetes-oriented, whereas Badger offers a simpler, Python-native approach to defining workflows.
Run your GitHub Actions locally 🚀
Pros of act
- Allows running GitHub Actions locally, enabling easier testing and debugging
- Supports running actions in Docker containers, closely mimicking GitHub's environment
- Has a larger community and more frequent updates
Cons of act
- Limited to running GitHub Actions workflows only
- May not fully replicate all GitHub Actions features and behaviors
- Requires Docker to be installed and running
Code Comparison
act:
```yaml
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Run tests
        run: npm test
```
Badger:
```yaml
tasks:
  test:
    image: node:14
    steps:
      - checkout
      - run: npm test
```
Summary
act focuses specifically on running GitHub Actions workflows locally, while Badger is a more general-purpose CI/CD tool. act provides a closer simulation of the GitHub Actions environment, but is limited to that ecosystem. Badger offers more flexibility in terms of CI/CD configurations and can be used with various version control systems and deployment targets. The choice between the two depends on whether you need to test GitHub Actions specifically or want a more versatile CI/CD solution.
Harness Open Source is an end-to-end developer platform with Source Control Management, CI/CD Pipelines, Hosted Developer Environments, and Artifact Registries.
Pros of Harness
- More comprehensive feature set for continuous delivery and deployment
- Larger community and ecosystem with extensive documentation
- Supports a wider range of integrations with popular tools and platforms
Cons of Harness
- More complex setup and configuration process
- Steeper learning curve for new users
- Higher resource requirements for running the platform
Code Comparison
Harness (YAML configuration):
```yaml
pipeline:
  name: My Pipeline
  identifier: My_Pipeline
  projectIdentifier: MyProject
  orgIdentifier: default
  tags: {}
  stages:
    - stage:
        name: Build
```
Badger (JavaScript configuration):
```javascript
const pipeline = new Pipeline({
  name: 'My Pipeline',
  stages: [
    new Stage({
      name: 'Build',
      // Stage configuration
    }),
  ],
});
```
While both repositories focus on CI/CD and deployment automation, they differ in approach and scope. Harness offers a more comprehensive platform with a wider range of features, while Badger appears to be a more lightweight and focused tool. The code comparison shows that Harness uses YAML for configuration, while Badger uses JavaScript, which may appeal to different developer preferences.
Concourse is a container-based automation system written in Go.
Pros of Concourse
- More mature and widely adopted CI/CD platform with a larger community
- Offers a web-based UI for pipeline visualization and management
- Supports a wide range of integrations and resources out-of-the-box
Cons of Concourse
- Steeper learning curve due to its unique concepts and terminology
- Requires more infrastructure setup and maintenance
- Can be resource-intensive for smaller projects or teams
Code Comparison
Concourse pipeline configuration (YAML):
```yaml
jobs:
  - name: hello-world
    plan:
      - task: say-hello
        config:
          platform: linux
          image_resource:
            type: docker-image
            source: {repository: alpine}
          run:
            path: echo
            args: ["Hello, world!"]
```
Badger workflow configuration (YAML):
```yaml
jobs:
  hello-world:
    steps:
      - run: echo "Hello, world!"
```
While both projects use YAML for configuration, Concourse's syntax is more verbose and structured, reflecting its more complex feature set. Badger's configuration is simpler and more straightforward, which may be preferable for smaller projects or teams new to CI/CD.
Concourse is better suited for large-scale, complex pipelines with multiple integrations, while Badger might be more appropriate for simpler workflows or teams looking for a lightweight solution with a gentler learning curve.
BadgerDB
BadgerDB is an embeddable, persistent and fast key-value (KV) database written in pure Go. It is the underlying database for Dgraph, a fast, distributed graph database. It's meant to be a performant alternative to non-Go-based key-value stores like RocksDB.
Project Status
Badger is stable and is being used to serve data sets worth hundreds of terabytes. Badger supports concurrent ACID transactions with serializable snapshot isolation (SSI) guarantees. A Jepsen-style bank test runs nightly for 8 hours with the --race flag to verify that transactional guarantees hold. Badger has also been tested against filesystem-level anomalies to ensure persistence and consistency. Badger is used by a number of projects, including Dgraph, Jaeger Tracing, UsenetExpress, and many more.
The list of projects using Badger can be found here.
Badger v1.0 was released in Nov 2017, and the latest version that is data-compatible with v1.0 is v1.6.0.
Badger v2.0 was released in Nov 2019 with a new storage format that is not compatible with v1.x. Badger v2.0 supports compression and encryption, and uses a cache to speed up lookups.
Badger v3.0 was released in January 2021. This release improves compaction performance.
Please consult the Changelog for more detailed information on releases.
For more details on our version naming schema please read Choosing a version.
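The compression, encryption, and caching features introduced in v2.0 are enabled through options when opening a database. The sketch below is written against the v4 API (option names such as `WithCompression` and `WithIndexCacheSize` may differ in other versions; consult the Badger documentation for the authoritative list):

```go
package main

import (
	"log"

	badger "github.com/dgraph-io/badger/v4"
	"github.com/dgraph-io/badger/v4/options"
)

func main() {
	key := make([]byte, 32) // placeholder AES-256 key; use real key material in practice

	opts := badger.DefaultOptions("/tmp/badger").
		WithCompression(options.ZSTD). // block compression (v2+)
		WithEncryptionKey(key).        // encryption at rest (v2+)
		WithIndexCacheSize(100 << 20)  // index cache; required when encryption is enabled

	db, err := badger.Open(opts)
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()
}
```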
Getting Started
Installing
To start using Badger, install Go 1.21 or above. Badger v3 and above requires Go modules. From your project, run the following command:
```shell
go get github.com/dgraph-io/badger/v4
```
This will retrieve the library.
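Once the library is retrieved, basic usage follows an open-transact-close pattern. A minimal sketch against the v4 API (see the Badger documentation for the authoritative quickstart):

```go
package main

import (
	"fmt"
	"log"

	badger "github.com/dgraph-io/badger/v4"
)

func main() {
	// Open (or create) a database in the given directory.
	db, err := badger.Open(badger.DefaultOptions("/tmp/badger"))
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Read-write transaction: set a key.
	err = db.Update(func(txn *badger.Txn) error {
		return txn.Set([]byte("answer"), []byte("42"))
	})
	if err != nil {
		log.Fatal(err)
	}

	// Read-only transaction: read it back.
	err = db.View(func(txn *badger.Txn) error {
		item, err := txn.Get([]byte("answer"))
		if err != nil {
			return err
		}
		return item.Value(func(val []byte) error {
			fmt.Printf("answer = %s\n", val)
			return nil
		})
	})
	if err != nil {
		log.Fatal(err)
	}
}
```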
Installing Badger Command Line Tool
Badger provides a CLI tool which can perform operations like offline backup and restore. To install the Badger CLI, clone the repository, check out the desired version, and run:
```shell
cd badger
go install .
```
This will install the badger command-line utility into your $GOBIN path.
Choosing a version
BadgerDB is a special package in that the most important changes we can make to it are not to its API but to how data is stored on disk.
This is why we follow a version naming scheme that differs from Semantic Versioning.
- New major versions are released when the on-disk data format changes in an incompatible way.
- New minor versions are released whenever the API changes but data compatibility is maintained. Note that API changes may be backward-incompatible, unlike in Semantic Versioning.
- New patch versions are released when there are no changes to the data format or the API.
Following these rules:
- v1.5.0 and v1.6.0 can be used on top of the same files without any concerns, as their major version is the same, therefore the data format on disk is compatible.
- v1.6.0 and v2.0.0 are data incompatible as their major version implies, so files created with v1.6.0 will need to be converted into the new format before they can be used by v2.0.0.
- v2.x.x and v3.x.x are data incompatible as their major versions imply, so files created with v2.x.x will need to be converted into the new format before they can be used by v3.x.x.
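The rules above reduce to a single check: two releases can share on-disk files exactly when their major versions match. A trivial sketch (the function names here are illustrative, not part of Badger):

```go
package main

import (
	"fmt"
	"strings"
)

// major extracts the major version from a tag like "v2.0.3".
func major(tag string) string {
	return strings.SplitN(strings.TrimPrefix(tag, "v"), ".", 2)[0]
}

// dataCompatible reports whether two Badger releases can share
// on-disk files: under Badger's scheme, the data format changes
// only with the major version.
func dataCompatible(a, b string) bool {
	return major(a) == major(b)
}

func main() {
	fmt.Println(dataCompatible("v1.5.0", "v1.6.0")) // true
	fmt.Println(dataCompatible("v1.6.0", "v2.0.0")) // false
}
```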
For a longer explanation on the reasons behind using a new versioning naming schema, you can read VERSIONING.
Badger Documentation
Badger Documentation is available at https://docs.hypermode.com/badger
Resources
Blog Posts
- Introducing Badger: A fast key-value store written natively in Go
- Make Badger crash resilient with ALICE
- Badger vs LMDB vs BoltDB: Benchmarking key-value databases in Go
- Concurrent ACID Transactions in Badger
Design
Badger was written with these design goals in mind:
- Write a key-value database in pure Go.
- Use latest research to build the fastest KV database for data sets spanning terabytes.
- Optimize for SSDs.
Badger's design is based on a paper titled WiscKey: Separating Keys from Values in SSD-conscious Storage.
Comparisons
Feature | Badger | RocksDB | BoltDB |
---|---|---|---|
Design | LSM tree with value log | LSM tree only | B+ tree |
High Read throughput | Yes | No | Yes |
High Write throughput | Yes | Yes | No |
Designed for SSDs | Yes (with latest research 1) | Not specifically 2 | No |
Embeddable | Yes | Yes | Yes |
Sorted KV access | Yes | Yes | Yes |
Pure Go (no Cgo) | Yes | No | Yes |
Transactions | Yes, ACID, concurrent with SSI3 | Yes (but non-ACID) | Yes, ACID |
Snapshots | Yes | Yes | Yes |
TTL support | Yes | Yes | No |
3D access (key-value-version) | Yes4 | No | No |
1 The WISCKEY paper (on which Badger is based) saw big wins with separating values from keys, significantly reducing the write amplification compared to a typical LSM tree.
2 RocksDB is an SSD-optimized version of LevelDB, which was designed specifically for rotating disks. As such, RocksDB's design isn't aimed at SSDs from the ground up.
3 SSI: Serializable Snapshot Isolation. For more details, see the blog post Concurrent ACID Transactions in Badger
4 Badger provides direct access to value versions via its Iterator API. Users can also specify how many versions to keep per key via Options.
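The WiscKey idea referenced in footnote 1 — keeping values out of the LSM tree in an append-only value log, so the tree stores only keys and small pointers — can be sketched in a few lines. This is a toy in-memory model of the concept, not Badger's actual implementation (which keeps the index in an on-disk LSM tree):

```go
package main

import "fmt"

// valuePointer locates a value inside the append-only value log.
type valuePointer struct {
	offset, size int
}

// store separates keys from values: the index holds only keys and
// pointers (cheap to keep sorted and compact), while full values
// live in the value log and are never rewritten in place.
type store struct {
	index map[string]valuePointer
	vlog  []byte
}

func newStore() *store {
	return &store{index: make(map[string]valuePointer)}
}

func (s *store) set(key string, value []byte) {
	s.index[key] = valuePointer{offset: len(s.vlog), size: len(value)}
	s.vlog = append(s.vlog, value...) // append-only: low write amplification
}

func (s *store) get(key string) ([]byte, bool) {
	p, ok := s.index[key]
	if !ok {
		return nil, false
	}
	return s.vlog[p.offset : p.offset+p.size], true
}

func main() {
	s := newStore()
	s.set("name", []byte("badger"))
	v, _ := s.get("name")
	fmt.Printf("%s\n", v) // badger
}
```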
Benchmarks
We have run comprehensive benchmarks against RocksDB, Bolt, and LMDB. The benchmarking code and detailed logs can be found in the badger-bench repo. More explanation, including graphs, can be found in the blog posts linked above.
Projects Using Badger
Below is a list of known projects that use Badger:
- Dgraph - Distributed graph database.
- Jaeger - Distributed tracing platform.
- go-ipfs - Go client for the InterPlanetary File System (IPFS), a new hypermedia distribution protocol.
- Riot - An open-source, distributed search engine.
- emitter - Scalable, low latency, distributed pub/sub broker with message storage, uses MQTT, gossip and badger.
- OctoSQL - Query tool that allows you to join, analyse and transform data from multiple databases using SQL.
- Dkron - Distributed, fault tolerant job scheduling system.
- smallstep/certificates - Step-ca is an online certificate authority for secure, automated certificate management.
- Sandglass - distributed, horizontally scalable, persistent, time sorted message queue.
- TalariaDB - Grab's Distributed, low latency time-series database.
- Sloop - Salesforce's Kubernetes History Visualization Project.
- Usenet Express - Serving over 300TB of data with Badger.
- gorush - A push notification server written in Go.
- 0-stor - Single device object store.
- Dispatch Protocol - Blockchain protocol for distributed application data analytics.
- GarageMQ - AMQP server written in Go.
- RedixDB - A real-time persistent key-value store with the same redis protocol.
- BBVA - Raft backend implementation using BadgerDB for Hashicorp raft.
- Fantom - aBFT Consensus platform for distributed applications.
- decred - An open, progressive, and self-funding cryptocurrency with a system of community-based governance integrated into its blockchain.
- OpenNetSys - Create useful dApps in any software language.
- HoneyTrap - An extensible and opensource system for running, monitoring and managing honeypots.
- Insolar - Enterprise-ready blockchain platform.
- IoTeX - The next generation of the decentralized network for IoT powered by scalability- and privacy-centric blockchains.
- go-sessions - The sessions manager for Go net/http and fasthttp.
- Babble - BFT Consensus platform for distributed applications.
- Tormenta - Embedded object-persistence layer / simple JSON database for Go projects.
- BadgerHold - An embeddable NoSQL store for querying Go types built on Badger
- Goblero - Pure Go embedded persistent job queue backed by BadgerDB
- Surfline - Serving global wave and weather forecast data with Badger.
- Cete - Simple and highly available distributed key-value store built on Badger. Makes it easy bringing up a cluster of Badger with Raft consensus algorithm by hashicorp/raft.
- Volument - A new take on website analytics backed by Badger.
- KVdb - Hosted key-value store and serverless platform built on top of Badger.
- Terminotes - Self hosted notes storage and search server - storage powered by BadgerDB
- Pyroscope - Open source continuous profiling platform built with BadgerDB
- Veri - A distributed feature store optimized for Search and Recommendation tasks.
- bIter - A library and Iterator interface for working with the badger.Iterator, simplifying from-to and prefix mechanics.
- ld - (Lean Database) A very simple gRPC-only key-value database, exposing BadgerDB with key-range scanning semantics.
- Souin - A RFC compliant HTTP cache with lot of other features based on Badger for the storage. Compatible with all existing reverse-proxies.
- Xuperchain - A highly flexible blockchain architecture with great transaction performance.
- m2 - A simple http key/value store based on the raft protocol.
- chaindb - A blockchain storage layer used by Gossamer, a Go client for the Polkadot Network.
- vxdb - Simple schema-less Key-Value NoSQL database with simplest API interface.
- Opacity - Backend implementation for the Opacity storage project
- Vephar - A minimal key/value store using hashicorp-raft for cluster coordination and Badger for data storage.
- gowarcserver - Open-source server for warc files. Can be used in conjunction with pywb
- flow-go - A fast, secure, and developer-friendly blockchain built to support the next generation of games, apps and the digital assets that power them.
- Wrgl - A data version control system that works like Git but specialized to store and diff CSV.
- Loggie - A lightweight, cloud-native data transfer agent and aggregator.
- raft-badger - raft-badger implements LogStore and StableStore Interface of hashcorp/raft. it is used to store raft log and metadata of hashcorp/raft.
- DVID - A dataservice for branched versioning of a variety of data types. Originally created for large-scale brain reconstructions in Connectomics.
- KVS - A library for making it easy to persist, load and query full structs into BadgerDB, using an ownership hierarchy model.
- LLS - LLS is an efficient URL Shortener that can be used to shorten links and track link usage. Support for BadgerDB and MongoDB. Improved performance by more than 30% when using BadgerDB
- lakeFS - lakeFS is an open-source data version control that transforms your object storage to Git-like repositories. lakeFS uses BadgerDB for its underlying local metadata KV store implementation
- Goptivum - Goptivum is a better frontend and API for the Vulcan Optivum schedule program
- ActionManager - A dynamic entity manager based on rjsf schema and badger db
- MightyMap - Mightymap: Conveys both robustness and high capability, fitting for a powerful concurrent map.
- FlowG - A low-code log processing facility
- Bluefin - Bluefin is a TUNA Proof of Work miner for the Fortuna smart contract on the Cardano blockchain
- cDNSd - A Cardano blockchain backed DNS server daemon
- Dingo - A Cardano blockchain data node
If you are using Badger in a project please send a pull request to add it to the list.
Contributing
If you're interested in contributing to Badger see CONTRIBUTING.
Contact
- Please use Github issues for filing bugs.
- Please use discuss.dgraph.io for questions, discussions, and feature requests.
- Follow us on Twitter @dgraphlabs.