Top Related Projects
- Ceph: a distributed object, block, and file storage platform
- Longhorn: cloud-native distributed storage built on and for Kubernetes
- OpenEBS: the most popular and widely deployed open-source container-native storage platform for stateful persistent applications on Kubernetes
- MinIO: a high-performance, S3-compatible object store, open sourced under the GNU AGPLv3 license
- GlusterFS: build your distributed storage in minutes
Quick Overview
Rook is an open-source cloud-native storage orchestrator for Kubernetes, providing the platform, framework, and support for a diverse set of storage solutions to natively integrate with cloud-native environments. It automates the tasks of a storage administrator: deployment, bootstrapping, configuration, provisioning, scaling, upgrading, migration, disaster recovery, monitoring, and resource management.
Pros
- Seamless integration with Kubernetes for storage management
- Historically supported multiple storage providers (Ceph, EdgeFS, NFS); current releases focus on Ceph
- Automated deployment and management of storage clusters
- Highly scalable and designed for cloud-native environments
Cons
- Steep learning curve for users new to storage concepts
- Limited support for some legacy storage systems
- Complexity in troubleshooting due to multiple layers of abstraction
- Resource-intensive for small-scale deployments
Getting Started
To get started with Rook, follow these steps:
- Ensure you have a Kubernetes cluster running.
- Install Rook Operator:
kubectl create -f https://raw.githubusercontent.com/rook/rook/master/deploy/examples/crds.yaml
kubectl create -f https://raw.githubusercontent.com/rook/rook/master/deploy/examples/common.yaml
kubectl create -f https://raw.githubusercontent.com/rook/rook/master/deploy/examples/operator.yaml
- Deploy a Rook Ceph cluster:
kubectl create -f https://raw.githubusercontent.com/rook/rook/master/deploy/examples/cluster.yaml
- Verify the Rook deployment:
kubectl -n rook-ceph get pod
For more detailed instructions and advanced configurations, refer to the official Rook documentation.
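Beyond checking that pods are running, cluster health can be inspected with the Ceph CLI via the Rook toolbox. A minimal sketch, assuming the toolbox manifest lives alongside the example manifests used above (adjust the path for your Rook version):

```shell
# Deploy the Rook toolbox, which bundles the Ceph CLI tools
kubectl create -f https://raw.githubusercontent.com/rook/rook/master/deploy/examples/toolbox.yaml

# Wait for the toolbox deployment, then query overall cluster health
kubectl -n rook-ceph rollout status deploy/rook-ceph-tools
kubectl -n rook-ceph exec deploy/rook-ceph-tools -- ceph status
```

A healthy cluster reports HEALTH_OK once all monitors and OSDs are up.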
Competitor Comparisons
Ceph: a distributed object, block, and file storage platform
Pros of Ceph
- Mature and battle-tested distributed storage system with a long history
- Highly scalable and flexible, supporting object, block, and file storage
- Rich feature set including snapshots, replication, and erasure coding
Cons of Ceph
- Complex to set up and manage without additional tooling
- Steeper learning curve for administrators
- Resource-intensive, requiring more hardware resources
Code Comparison
Ceph (C++):
// Simplified Ceph client entry point: initialize and finalize the global Ceph context
int main(int argc, const char **argv) {
  std::vector<const char*> args;
  argv_to_vec(argc, argv, args);
  auto cct = global_init(NULL, args, CEPH_ENTITY_TYPE_CLIENT,
                         CODE_ENVIRONMENT_UTILITY, 0);
  common_init_finish(g_ceph_context);
}
Rook (Go):
func main() {
    cmd.Execute()
}

func init() {
    rootCmd.AddCommand(operatorCmd)
    rootCmd.AddCommand(discoverCmd)
    rootCmd.AddCommand(versionCmd)
}
The code snippets highlight the different languages and entry points for each project. Ceph's main function initializes the Ceph environment, while Rook's main function is more concise, delegating to a command execution structure typical of Go projects.
Longhorn: cloud-native distributed storage built on and for Kubernetes
Pros of Longhorn
- Simpler setup and management for basic use cases
- Native cloud-native storage solution specifically designed for Kubernetes
- Built-in support for data locality and storage efficiency features
Cons of Longhorn
- Limited support for advanced storage features compared to Rook
- Smaller ecosystem and community support
- Less flexibility in terms of underlying storage providers
Code Comparison
Longhorn deployment:
apiVersion: longhorn.io/v1beta1
kind: Volume
metadata:
  name: example-volume
spec:
  size: 10Gi
  numberOfReplicas: 3
Rook Ceph deployment:
apiVersion: ceph.rook.io/v1
kind: CephBlockPool
metadata:
  name: replicapool
  namespace: rook-ceph
spec:
  failureDomain: host
  replicated:
    size: 3
Both Longhorn and Rook provide Kubernetes-native storage solutions, but they differ in complexity and feature sets. Longhorn offers a more straightforward approach for basic storage needs, while Rook provides greater flexibility and support for various storage backends, including Ceph. The code examples demonstrate the simplicity of Longhorn's volume creation compared to Rook's more detailed configuration for Ceph block pools.
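From an application's point of view, both systems are ultimately consumed the same way: through a PersistentVolumeClaim bound to a StorageClass. A hedged sketch (the class name `longhorn` is Longhorn's default; a Rook cluster would substitute a class backed by its own pool):

```shell
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: longhorn   # swap for a Rook-backed class as appropriate
  resources:
    requests:
      storage: 10Gi
EOF
```

The claim abstracts away which backend provisions the volume, which is precisely what makes the two projects interchangeable at the workload level.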
OpenEBS: the most popular and widely deployed open-source container-native storage platform for stateful persistent applications on Kubernetes
Pros of OpenEBS
- Simpler architecture and easier to set up for basic use cases
- Native support for local PV provisioning without additional components
- Better performance for certain workloads due to its architecture
Cons of OpenEBS
- Less mature and feature-rich compared to Rook
- Limited support for advanced storage features and configurations
- Smaller community and ecosystem compared to Rook
Code Comparison
OpenEBS (StorageClass example):
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-jiva-default
provisioner: openebs.io/provisioner-iscsi
parameters:
  openebs.io/storage-pool-name: "default"
  openebs.io/jiva-replica-count: "3"
Rook (CephBlockPool example):
apiVersion: ceph.rook.io/v1
kind: CephBlockPool
metadata:
  name: replicapool
  namespace: rook-ceph
spec:
  failureDomain: host
  replicated:
    size: 3
Both projects aim to provide cloud-native storage solutions for Kubernetes, but they differ in their approach and feature set. OpenEBS focuses on simplicity and ease of use, while Rook offers more advanced features and supports multiple storage backends. The code examples show the difference in configuration complexity, with OpenEBS using a standard Kubernetes StorageClass and Rook requiring custom resources for more granular control.
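For completeness, a Rook pool like the one above is typically exposed to workloads through a StorageClass referencing the Ceph CSI driver. A sketch, assuming the default `rook-ceph` namespace and the `replicapool` pool from the example (the CSI secret parameters from Rook's full example storageclass.yaml are omitted for brevity):

```shell
cat <<'EOF' | kubectl apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-ceph-block
provisioner: rook-ceph.rbd.csi.ceph.com   # CSI provisioner deployed by the Rook operator
parameters:
  clusterID: rook-ceph                     # namespace of the Rook cluster
  pool: replicapool                        # the CephBlockPool defined above
  csi.storage.k8s.io/fstype: ext4
reclaimPolicy: Delete
EOF
```

With this in place, Rook volumes are requested through an ordinary PersistentVolumeClaim, closing the gap with the OpenEBS StorageClass shown above.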
MinIO: a high-performance, S3-compatible object store, open sourced under the GNU AGPLv3 license
Pros of MinIO
- Simpler setup and deployment, especially for standalone object storage
- Native S3 compatibility, making it easier to integrate with existing S3-based applications
- Lightweight and can run on a wide range of hardware, from small devices to large clusters
Cons of MinIO
- Less comprehensive storage management features compared to Rook's broader ecosystem support
- Limited to object storage, while Rook supports multiple storage types (block, file, and object)
- May require additional tools for advanced cluster management and monitoring
Code Comparison
MinIO (Go):
func (xl xlObjects) PutObject(ctx context.Context, bucket, object string, data *PutObjReader, opts ObjectOptions) (objInfo ObjectInfo, err error) {
    // MinIO-specific object storage implementation
}
Rook (Go):
func (c *Cluster) createCephFS(clusterInfo *cephclient.ClusterInfo) error {
    // Rook-specific CephFS creation logic
}
The code snippets highlight the different focus areas:
- MinIO deals directly with object storage operations
- Rook manages broader storage cluster configurations, including CephFS
Both projects use Go, but their implementations reflect their distinct purposes in the storage ecosystem.
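MinIO's S3 compatibility is visible from the client side: standard S3 tooling works unchanged. A sketch using the MinIO client `mc` against a hypothetical local server (the endpoint and credentials are placeholders for illustration):

```shell
# Register a local MinIO endpoint under the alias "local" (placeholder credentials)
mc alias set local http://localhost:9000 minioadmin minioadmin

# Create a bucket and upload an object, exactly as with any S3 store
mc mb local/demo-bucket
echo "hello" > hello.txt
mc cp hello.txt local/demo-bucket/
mc ls local/demo-bucket
```

The same bucket could equally be driven by any S3 SDK, which is what makes MinIO a drop-in target for existing S3-based applications.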
GlusterFS: build your distributed storage in minutes
Pros of GlusterFS
- Mature and battle-tested distributed file system with a long history
- Supports a wide range of use cases and deployment scenarios
- Offers flexible volume types and data replication options
Cons of GlusterFS
- Can be complex to set up and manage, especially for large clusters
- Performance may degrade with certain workloads or configurations
- Less integrated with cloud-native ecosystems compared to Rook
Code Comparison
GlusterFS volume creation:
gluster volume create test-volume replica 2 server1:/exp1 server2:/exp2
gluster volume start test-volume
Rook-Ceph storage cluster creation:
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
spec:
  dataDirHostPath: /var/lib/rook
  mon:
    count: 3
GlusterFS focuses on traditional distributed storage setups, while Rook provides a more Kubernetes-native approach to storage orchestration. GlusterFS offers more direct control over volume creation and management, whereas Rook abstracts these operations through Kubernetes custom resources. Rook's integration with Kubernetes makes it easier to deploy and manage storage in cloud-native environments, but GlusterFS may be preferred in scenarios where more fine-grained control over storage configuration is required.
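The operational difference carries through to consumption: a GlusterFS volume is mounted directly on a client host, while Rook-managed storage is claimed through Kubernetes objects. A sketch of the GlusterFS side, continuing the server and volume names from the example above:

```shell
# Mount the GlusterFS volume created above on a client host
mkdir -p /mnt/test-volume
mount -t glusterfs server1:/test-volume /mnt/test-volume

# Verify the mount is active
df -h /mnt/test-volume
```

This direct, host-level workflow is exactly the fine-grained control that GlusterFS users value, and exactly the layer Rook abstracts away behind custom resources.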
README
What is Rook?
Rook is an open source cloud-native storage orchestrator for Kubernetes, providing the platform, framework, and support for Ceph storage to natively integrate with Kubernetes.
Ceph is a distributed storage system that provides file, block and object storage and is deployed in large scale production clusters.
Rook automates deployment and management of Ceph to provide self-managing, self-scaling, and self-healing storage services. The Rook operator does this by building on Kubernetes resources to deploy, configure, provision, scale, upgrade, and monitor Ceph.
The Ceph storage provider is Stable. Features and improvements are planned for many future versions, and upgrades between releases maintain backward compatibility.
Rook is hosted by the Cloud Native Computing Foundation (CNCF) as a graduated level project. If you are a company that wants to help shape the evolution of technologies that are container-packaged, dynamically-scheduled and microservices-oriented, consider joining the CNCF. For details about who's involved and how Rook plays a role, read the CNCF announcement.
Getting Started and Documentation
For installation, deployment, and administration, see our Documentation and QuickStart Guide.
Contributing
We welcome contributions. See Contributing to get started.
Report a Bug
For filing bugs, suggesting improvements, or requesting new features, please open an issue.
Reporting Security Vulnerabilities
If you find a vulnerability or a potential vulnerability in Rook, please let us know immediately at cncf-rook-security@lists.cncf.io. We'll send a confirmation email to acknowledge your report, and a follow-up email once we have confirmed or ruled out the issue.
For further details, please see the complete security release process.
Contact
Please use the following to reach members of the community:
- Slack: Join our slack channel
- GitHub: Start a discussion or open an issue
- Twitter: @rook_io
- Security topics: cncf-rook-security@lists.cncf.io
Community Meeting
A regular community meeting takes place on the 2nd Tuesday of every month at 9:00 AM Pacific Time. Convert to your local timezone.
Any changes to the meeting schedule will be added to the agenda doc and posted to Slack #announcements.
Anyone who wants to discuss the direction of the project, design and implementation reviews, or general questions with the broader community is welcome and encouraged to join.
- Meeting link: https://zoom.us/j/98052644520?pwd=K0R4RUZCc3NhQisyMnA5VlV2MVBhQT09
- Current agenda and past meeting notes
- Past meeting recordings
Official Releases
Official releases of Rook can be found on the releases page. We strongly recommend using official releases: unreleased builds from the master branch are subject to change, and functionality may be changed or even removed at any time without compatibility support and without prior notice.
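In practice this means pinning manifest URLs to a release tag rather than master. A sketch (substitute a real tag from the releases page for the placeholder below):

```shell
# Pin to a specific release tag instead of master (placeholder tag shown)
ROOK_VERSION=v1.x.y
kubectl create -f "https://raw.githubusercontent.com/rook/rook/${ROOK_VERSION}/deploy/examples/crds.yaml"
kubectl create -f "https://raw.githubusercontent.com/rook/rook/${ROOK_VERSION}/deploy/examples/common.yaml"
kubectl create -f "https://raw.githubusercontent.com/rook/rook/${ROOK_VERSION}/deploy/examples/operator.yaml"
```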
Licensing
Rook is under the Apache 2.0 license.