etcd
Distributed reliable key-value store for the most critical data of a distributed system
Top Related Projects
Consul is a distributed, highly available, and data center aware solution to connect and configure applications across dynamic, distributed infrastructure.
Apache ZooKeeper
High-Performance server for NATS.io, the cloud and edge native messaging system.
Distributed transactional key-value database, originally created to complement TiDB
CockroachDB — the cloud native, distributed SQL database designed for high availability, effortless scale, and control over data placement.
Quick Overview
etcd is a distributed key-value store that provides a reliable way to store data across a cluster of machines. It's designed to be highly available, consistent, and fault-tolerant, making it an essential component in many distributed systems and container orchestration platforms like Kubernetes.
Pros
- Strong consistency and reliability through the Raft consensus algorithm
- High availability with automatic leader election and failover
- Supports watch operations for real-time updates on data changes
- Well-integrated with Kubernetes and other cloud-native technologies
Cons
- Can be complex to set up and manage for small-scale applications
- Performance may degrade with very large datasets or high write loads
- Limited support for complex queries compared to traditional databases
- Requires careful configuration for optimal performance in production environments
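
On the last point, here is a minimal sketch of commonly tuned server flags; the values are illustrative placeholders, not recommendations:

```sh
etcd --name node1 \
  --data-dir /var/lib/etcd \
  --quota-backend-bytes 8589934592 \
  --auto-compaction-retention 1 \
  --snapshot-count 10000
```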
Code Examples
- Connecting to etcd and performing basic operations:
```go
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	clientv3 "go.etcd.io/etcd/client/v3"
)

func main() {
	// Connect to a local etcd endpoint.
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"localhost:2379"},
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	// Write a key with a per-request timeout.
	ctx, cancel := context.WithTimeout(context.Background(), time.Second)
	_, err = cli.Put(ctx, "foo", "bar")
	cancel()
	if err != nil {
		log.Fatal(err)
	}

	// Read the key back.
	ctx, cancel = context.WithTimeout(context.Background(), time.Second)
	resp, err := cli.Get(ctx, "foo")
	cancel()
	if err != nil {
		log.Fatal(err)
	}
	for _, ev := range resp.Kvs {
		fmt.Printf("%s : %s\n", ev.Key, ev.Value)
	}
}
```
- Using etcd watch to monitor changes:
```go
package main

import (
	"context"
	"fmt"
	"log"

	clientv3 "go.etcd.io/etcd/client/v3"
)

func main() {
	cli, err := clientv3.New(clientv3.Config{
		Endpoints: []string{"localhost:2379"},
	})
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	// Watch streams events for "mykey" until the context is canceled.
	watchChan := cli.Watch(context.Background(), "mykey")
	for watchResp := range watchChan {
		for _, event := range watchResp.Events {
			fmt.Printf("Event received! %s %q : %q\n", event.Type, event.Kv.Key, event.Kv.Value)
		}
	}
}
```
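
The example above watches a single exact key. Watch also accepts options; as a minimal sketch (the "config/" prefix is illustrative), clientv3.WithPrefix() streams events for an entire keyspace prefix:

```go
package main

import (
	"context"
	"fmt"
	"log"

	clientv3 "go.etcd.io/etcd/client/v3"
)

func main() {
	cli, err := clientv3.New(clientv3.Config{Endpoints: []string{"localhost:2379"}})
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	// WithPrefix matches every key under "config/" rather than one exact key.
	for resp := range cli.Watch(context.Background(), "config/", clientv3.WithPrefix()) {
		for _, ev := range resp.Events {
			fmt.Printf("%s %q : %q\n", ev.Type, ev.Kv.Key, ev.Kv.Value)
		}
	}
}
```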
- Using etcd transactions:
```go
package main

import (
	"context"
	"fmt"
	"log"

	clientv3 "go.etcd.io/etcd/client/v3"
)

func main() {
	cli, err := clientv3.New(clientv3.Config{
		Endpoints: []string{"localhost:2379"},
	})
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	// The transaction applies Then() if the comparison holds and Else()
	// otherwise, atomically on the server.
	kv := clientv3.NewKV(cli)
	_, err = kv.Txn(context.Background()).
		If(clientv3.Compare(clientv3.Value("key"), "=", "value")).
		Then(clientv3.OpPut("key", "new_value")).
		Else(clientv3.OpPut("key", "value")).
		Commit()
	if err != nil {
		log.Fatal(err)
	}

	resp, err := kv.Get(context.Background(), "key")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("Value: %s\n", resp.Kvs[0].Value)
}
```
Getting Started
To start using etcd in your Go project:
- Install etcd: `brew install etcd` (macOS with Homebrew), or grab a pre-built binary from the releases page
- Install the Go client: `go get go.etcd.io/etcd/client/v3`
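
With etcd installed, a quick sanity check from the shell (assuming etcdctl came with your install and both binaries are on your PATH):

```sh
# Start a single-member etcd in one terminal.
etcd

# In another terminal, write and read a key, then check endpoint health.
etcdctl put greeting "hello etcd"
etcdctl get greeting
etcdctl endpoint health
```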
Competitor Comparisons
Consul is a distributed, highly available, and data center aware solution to connect and configure applications across dynamic, distributed infrastructure.
Pros of Consul
- More comprehensive service mesh and service discovery features
- Built-in health checking and DNS interface
- Supports multiple data centers out of the box
Cons of Consul
- Higher complexity and steeper learning curve
- Potentially slower performance for simple key-value operations
- Requires more resources to run effectively
Code Comparison
Consul (setting a key-value pair):

```go
// Assumes client was created via api.NewClient(api.DefaultConfig()).
kv := client.KV()
p := &api.KVPair{Key: "foo", Value: []byte("bar")}
_, err := kv.Put(p, nil)
```

etcd (setting a key-value pair):

```go
ctx := context.Background()
_, err := cli.Put(ctx, "foo", "bar")
```
Both etcd and Consul are distributed key-value stores and service discovery systems, but they have different focuses and strengths. etcd is simpler and more lightweight, primarily designed for storing configuration data and supporting distributed systems. Consul offers a more comprehensive suite of features for service mesh and microservices architectures.
etcd generally performs better for simple key-value operations, while Consul excels in complex service discovery scenarios. The choice between the two often depends on specific project requirements and the broader ecosystem in use.
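
To make the health-checking point from the pros above concrete, here is a hedged sketch using Consul's Go API (github.com/hashicorp/consul/api); the service name, port, and check URL are illustrative:

```go
package main

import (
	"log"

	"github.com/hashicorp/consul/api"
)

func main() {
	client, err := api.NewClient(api.DefaultConfig())
	if err != nil {
		log.Fatal(err)
	}

	// Register a service with an HTTP health check that Consul polls itself.
	err = client.Agent().ServiceRegister(&api.AgentServiceRegistration{
		Name: "web",
		Port: 8080,
		Check: &api.AgentServiceCheck{
			HTTP:     "http://localhost:8080/health",
			Interval: "10s",
		},
	})
	if err != nil {
		log.Fatal(err)
	}
}
```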
Apache ZooKeeper
Pros of ZooKeeper
- More mature and battle-tested, with a longer history in production environments
- Supports a wider range of programming languages and client libraries
- Offers more advanced features like dynamic reconfiguration and observer nodes
Cons of ZooKeeper
- Generally considered more complex to set up and maintain
- Slower write performance compared to etcd, especially in larger clusters
- Requires Java runtime, which may increase resource usage and deployment complexity
Code Comparison
ZooKeeper (Java):

```java
ZooKeeper zk = new ZooKeeper("localhost:2181", 3000, watcher);
String path = zk.create("/mynode", "mydata".getBytes(), ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
```

etcd (Go):

```go
cli, _ := clientv3.New(clientv3.Config{Endpoints: []string{"localhost:2379"}})
_, err := cli.Put(context.Background(), "mykey", "myvalue")
```
Both etcd and ZooKeeper are distributed key-value stores used for configuration management and service discovery. etcd is generally considered simpler and more lightweight, with better write performance in larger clusters. It's also designed to be more cloud-native and integrates well with Kubernetes. ZooKeeper, while more complex, offers a broader feature set and supports a wider range of programming languages, making it suitable for diverse environments and use cases.
High-Performance server for NATS.io, the cloud and edge native messaging system.
Pros of NATS Server
- Lightweight and high-performance messaging system
- Supports multiple messaging patterns (pub/sub, request/reply, etc.)
- Easy to set up and use with minimal configuration
Cons of NATS Server
- Limited built-in persistence options compared to etcd
- Less focus on distributed consensus and strong consistency
- Smaller ecosystem and fewer integrations with other tools
Code Comparison
NATS Server (Go):

```go
nc, err := nats.Connect(nats.DefaultURL)
if err != nil {
	log.Fatal(err)
}
defer nc.Close()
```

etcd (Go):

```go
cli, err := clientv3.New(clientv3.Config{Endpoints: []string{"localhost:2379"}})
if err != nil {
	log.Fatal(err)
}
defer cli.Close()
```
Both examples show how to establish a connection to the respective systems. NATS Server uses a simpler URL-based connection, while etcd requires more configuration options.
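
Since the pros above call out multiple messaging patterns, here is a hedged pub/sub sketch using the NATS Go client (github.com/nats-io/nats.go); the subject name and payload are illustrative:

```go
package main

import (
	"fmt"
	"log"
	"time"

	"github.com/nats-io/nats.go"
)

func main() {
	nc, err := nats.Connect(nats.DefaultURL)
	if err != nil {
		log.Fatal(err)
	}
	defer nc.Close()

	// Subscribe to a subject; the handler runs for every message received.
	_, err = nc.Subscribe("updates", func(m *nats.Msg) {
		fmt.Printf("received: %s\n", m.Data)
	})
	if err != nil {
		log.Fatal(err)
	}

	// Publish a message to the same subject.
	if err := nc.Publish("updates", []byte("hello")); err != nil {
		log.Fatal(err)
	}

	// Flush the connection, then give the async handler a moment to fire.
	nc.Flush()
	time.Sleep(100 * time.Millisecond)
}
```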
NATS Server is primarily designed for high-performance messaging and communication between distributed systems. It excels in scenarios requiring low-latency message delivery and scalability.
etcd, on the other hand, is built as a distributed key-value store with a focus on strong consistency and reliability. It's better suited for storing critical configuration data and implementing distributed coordination.
The choice between NATS Server and etcd depends on the specific requirements of your project, such as the need for messaging vs. distributed storage, consistency guarantees, and integration with other tools in your stack.
Distributed transactional key-value database, originally created to complement TiDB
Pros of TiKV
- Designed for large-scale distributed storage with horizontal scalability
- Supports ACID transactions and provides strong consistency
- Offers multi-version concurrency control (MVCC) for better performance
Cons of TiKV
- Higher complexity and resource requirements compared to etcd
- Steeper learning curve and more challenging to set up and maintain
- Less mature ecosystem and community support
Code Comparison
TiKV (Rust):

```rust
use tikv_client::RawClient;

let client = RawClient::new(vec!["127.0.0.1:2379"]).await?;
client.put("key".to_owned(), "value".to_owned()).await?;
let value = client.get("key".to_owned()).await?;
```

etcd (Go):

```go
cli, _ := clientv3.New(clientv3.Config{Endpoints: []string{"localhost:2379"}})
defer cli.Close()
_, err := cli.Put(context.Background(), "key", "value")
resp, _ := cli.Get(context.Background(), "key")
```
TiKV is a distributed key-value store designed for large-scale applications, offering ACID transactions and strong consistency. It excels in horizontal scalability but comes with increased complexity. etcd, on the other hand, is simpler and easier to set up, making it more suitable for smaller-scale deployments and configuration management. The code examples demonstrate the basic operations in both systems, with TiKV using Rust and etcd using Go.
CockroachDB — the cloud native, distributed SQL database designed for high availability, effortless scale, and control over data placement.
Pros of CockroachDB
- Designed for horizontal scalability and distributed SQL operations
- Offers strong consistency and survivability in multi-region deployments
- Supports a wider range of SQL features and ACID transactions
Cons of CockroachDB
- Higher resource consumption and complexity compared to etcd
- Steeper learning curve and more challenging setup process
- May be overkill for simpler key-value storage needs
Code Comparison
etcd (Go):

```go
cli, _ := clientv3.New(clientv3.Config{Endpoints: []string{"localhost:2379"}})
defer cli.Close()
ctx, cancel := context.WithTimeout(context.Background(), time.Second)
_, err := cli.Put(ctx, "key", "value")
cancel()
```

CockroachDB (SQL):

```sql
CREATE TABLE users (id UUID PRIMARY KEY DEFAULT gen_random_uuid(), name STRING);
INSERT INTO users (name) VALUES ('Alice');
SELECT name FROM users WHERE id = 'some-uuid';
```
Both projects are written primarily in Go, but CockroachDB focuses on SQL operations while etcd provides a simpler key-value API. CockroachDB is more suitable for complex distributed database needs, while etcd excels in lightweight distributed configuration and service discovery scenarios.
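
For a Go-to-Go comparison with the etcd snippet, here is a hedged sketch of connecting to CockroachDB from a Go program; CockroachDB speaks the PostgreSQL wire protocol, and the pgx driver and insecure local DSN below are assumptions for illustration:

```go
package main

import (
	"database/sql"
	"fmt"
	"log"

	_ "github.com/jackc/pgx/v5/stdlib" // registers the "pgx" database/sql driver
)

func main() {
	// DSN assumes a local insecure single-node cluster
	// (e.g. started with `cockroach start-single-node --insecure`).
	db, err := sql.Open("pgx", "postgresql://root@localhost:26257/defaultdb?sslmode=disable")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	var version string
	if err := db.QueryRow("SELECT version()").Scan(&version); err != nil {
		log.Fatal(err)
	}
	fmt.Println("connected:", version)
}
```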
README
etcd
Note: The main branch may be in an unstable or even broken state during development. For stable versions, see releases.
etcd is a distributed reliable key-value store for the most critical data of a distributed system, with a focus on being:
- Simple: well-defined, user-facing API (gRPC)
- Secure: automatic TLS with optional client cert authentication (see the client-side sketch after this list)
- Fast: benchmarked 10,000 writes/sec
- Reliable: properly distributed using Raft
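
As a minimal sketch of the Secure bullet above (the certificate file names are illustrative), clientv3.Config accepts a standard *tls.Config for client-cert authentication:

```go
package main

import (
	"crypto/tls"
	"crypto/x509"
	"log"
	"os"
	"time"

	clientv3 "go.etcd.io/etcd/client/v3"
)

func main() {
	// Load the client certificate and the CA that signed the server's cert.
	cert, err := tls.LoadX509KeyPair("client.crt", "client.key")
	if err != nil {
		log.Fatal(err)
	}
	caBytes, err := os.ReadFile("ca.crt")
	if err != nil {
		log.Fatal(err)
	}
	pool := x509.NewCertPool()
	pool.AppendCertsFromPEM(caBytes)

	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"https://localhost:2379"},
		DialTimeout: 5 * time.Second,
		TLS: &tls.Config{
			Certificates: []tls.Certificate{cert},
			RootCAs:      pool,
		},
	})
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()
}
```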
etcd is written in Go and uses the Raft consensus algorithm to manage a highly-available replicated log.
etcd is used in production by many companies, and the development team stands behind it in critical deployment scenarios, where etcd is frequently teamed with applications such as Kubernetes, locksmith, vulcand, Doorman, and many others. Reliability is further ensured by rigorous robustness testing.
See etcdctl for a simple command line client.
Original image credited to xkcd.com/2347, alterations by Josh Berkus.
Maintainers
Maintainers strive to shape an inclusive open source project culture where users are heard and contributors feel respected and empowered. Maintainers aim to build productive relationships across different companies and disciplines. Read more about Maintainers role and responsibilities.
Getting started
Getting etcd
The easiest way to get etcd is to use one of the pre-built release binaries which are available for OSX, Linux, Windows, and Docker on the release page.
For more installation guides, please check out play.etcd.io and operating etcd.
Running etcd
First start a single-member cluster of etcd.
If etcd is installed using the pre-built release binaries, run it from the installation location as below:
```sh
/tmp/etcd-download-test/etcd
```

If the binary is moved onto the system path, etcd can be run directly:

```sh
mv /tmp/etcd-download-test/etcd /usr/local/bin/
etcd
```
This will bring up etcd listening on port 2379 for client communication and on port 2380 for server-to-server communication.
Next, let's set a single key, and then retrieve it:
```sh
etcdctl put mykey "this is awesome"
etcdctl get mykey
```
etcd is now running and serving client requests. For more, see the Next steps section below.
etcd TCP ports
The official etcd ports are 2379 for client requests, and 2380 for peer communication.
Running a local etcd cluster
First install goreman, which manages Procfile-based applications.
Our Procfile script will set up a local example cluster. Start it with:
```sh
goreman start
```
This will bring up 3 etcd members (infra1, infra2, and infra3) and, optionally, an etcd grpc-proxy, which runs locally and composes a cluster.
Every cluster member and proxy accepts key value reads and key value writes.
Follow the comments in Procfile script to add a learner node to the cluster.
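
As a hedged illustration of that flow (the member name, peer URL, and ID below are placeholders), a learner is added with etcdctl and promoted once it has caught up:

```sh
# Add a new member as a non-voting learner.
etcdctl member add infra4 --peer-urls=http://127.0.0.1:32380 --learner

# Promote the learner to a voting member by its member ID.
etcdctl member promote <MEMBER_ID>
```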
Install etcd client v3
```sh
go get go.etcd.io/etcd/client/v3
```
Next steps
Now it's time to dig into the full etcd API and other guides.
- Read the full documentation.
- Review etcd frequently asked questions.
- Explore the full gRPC API.
- Set up a multi-machine cluster.
- Learn the config format, env variables and flags.
- Find language bindings and tools.
- Use TLS to secure an etcd cluster.
- Tune etcd.
Contact
- Email: etcd-dev
- Slack: #sig-etcd channel on Kubernetes (get an invite)
- Community meetings
Community meetings
etcd contributors and maintainers meet every week at 11:00 AM (USA Pacific) on Thursday, and meetings alternate between community meetings and issue triage meetings. Meeting agendas are recorded in a shared Google doc and everyone is welcome to suggest additional topics or other agendas.
Issue triage meetings are aimed at getting through our backlog of PRs and Issues. Triage meetings are open to any contributor; you don't have to be a reviewer or approver to help out! They can also be a good way to get started contributing.
The meeting lead role is rotated for each meeting between etcd maintainers or sig-etcd leads and is recorded in a shared Google sheet.
Meeting recordings are uploaded to the official etcd YouTube channel.
Get calendar invitations by joining etcd-dev mailing group.
Join the CNCF-funded Zoom channel: zoom.us/my/cncfetcdproject
Contributing
See CONTRIBUTING for details on setting up your development environment, submitting patches and the contribution workflow.
Please refer to community-membership.md for information on becoming an etcd project member. We welcome and look forward to your contributions to the project!
Please also refer to roadmap to get more details on the priorities for the next few major or minor releases.
Reporting bugs
See reporting bugs for details about reporting any issues. Before opening an issue please check it is not covered in our frequently asked questions.
Reporting a security vulnerability
See security disclosure and release process for details on how to report a security vulnerability and how the etcd team manages it.
Issue and PR management
See issue triage guidelines for details on how issues are managed.
See PR management for guidelines on how pull requests are managed.
etcd Emeritus Maintainers
These emeritus maintainers dedicated a part of their career to etcd and reviewed code, triaged bugs and pushed the project forward over a substantial period of time. Their contribution is greatly appreciated.
- Fanmin Shi
- Anthony Romano
- Brandon Philips
- Joe Betz
- Gyuho Lee
- Jingyi Hu
- Xiang Li
- Ben Darnell
- Sam Batschelet
- Piotr Tabor
- Hitoshi Mitake
License
etcd is under the Apache 2.0 license. See the LICENSE file for details.