
rancher/fleet

Deploy workloads from Git to large fleets of Kubernetes clusters


Top Related Projects

  • Flux: Open and extensible continuous delivery solution for Kubernetes. Powered by GitOps Toolkit.
  • Argo CD: Declarative Continuous Deployment for Kubernetes
  • Karmada: Open, Multi-Cloud, Multi-Cluster Kubernetes Orchestration
  • KubeVela: The Modern Application Platform.
  • OpenYurt: Extending your native Kubernetes to edge (project under CNCF)

Quick Overview

Rancher Fleet is a GitOps-style continuous delivery system for Kubernetes. It allows you to manage potentially millions of clusters and deployments from a single Git repository, providing a scalable and efficient way to manage large-scale Kubernetes environments.

Pros

  • Scalable management of multiple Kubernetes clusters from a single source
  • GitOps-based approach for consistent and version-controlled deployments
  • Supports multi-cluster and multi-environment deployments
  • Integrates well with existing Rancher ecosystem

Cons

  • Steep learning curve for users new to GitOps or Kubernetes
  • Limited customization options compared to some other GitOps tools
  • Requires Rancher for full functionality, which may not be suitable for all environments
  • Documentation can be sparse or outdated in some areas

Getting Started

To get started with Rancher Fleet:

  1. Install Rancher on your Kubernetes cluster
  2. Install the Fleet Helm charts:
helm repo add rancher-latest https://releases.rancher.com/server-charts/latest
helm repo update
helm install fleet-crd rancher-latest/fleet-crd -n cattle-fleet-system --create-namespace
helm install fleet rancher-latest/fleet -n cattle-fleet-system
  3. Create a Git repository for your Fleet configuration
  4. Define your deployments using Fleet's YAML format
  5. Apply the configuration to your clusters:
kubectl apply -f https://raw.githubusercontent.com/rancher/fleet/master/examples/simple/fleet.yaml
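
The "Fleet's YAML format" step above refers to a fleet.yaml file placed in each directory that Fleet deploys. A minimal sketch is below; the names and values are illustrative, not taken from the Fleet examples repo:

```yaml
# fleet.yaml: per-directory deployment options (illustrative names/values)
defaultNamespace: my-app      # namespace the bundle's resources deploy into
helm:
  releaseName: my-app         # override the generated Helm release name
  values:
    replicaCount: 2           # example value passed through to the chart
```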

For more detailed instructions, refer to the official Rancher Fleet documentation.

Competitor Comparisons


Open and extensible continuous delivery solution for Kubernetes. Powered by GitOps Toolkit.

Pros of Flux

  • More mature project with a larger community and ecosystem
  • Supports a wider range of GitOps workflows and use cases
  • Offers built-in support for Helm charts and Kustomize

Cons of Flux

  • Steeper learning curve for beginners
  • Requires more manual configuration and setup
  • Less integrated with other Kubernetes management tools

Code Comparison

Fleet:

kind: GitRepo
apiVersion: fleet.cattle.io/v1alpha1
metadata:
  name: my-app
  namespace: fleet-default
spec:
  repo: https://github.com/rancher/fleet-examples
  paths:
  - simple

Flux:

apiVersion: source.toolkit.fluxcd.io/v1beta2
kind: GitRepository
metadata:
  name: my-app
  namespace: flux-system
spec:
  interval: 1m
  url: https://github.com/fluxcd/flux2-kustomize-helm-example
  ref:
    branch: main

Both Fleet and Flux use custom resources to define Git repositories for GitOps. Fleet's configuration is simpler, while Flux offers more granular control over sync intervals and branch selection. Fleet's integration with Rancher makes it easier to set up and manage in Rancher-based environments, while Flux's flexibility and extensive feature set make it suitable for a wider range of use cases and complex GitOps workflows.
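
That extra configuration is visible in practice: in Flux, the GitRepository above is only a source, and a separate Kustomization object tells Flux what to apply from it. A sketch, with the path and intervals illustrative:

```yaml
# Flux Kustomization: applies manifests from the GitRepository source (illustrative)
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: my-app
  namespace: flux-system
spec:
  interval: 10m               # how often to reconcile applied resources
  sourceRef:
    kind: GitRepository
    name: my-app              # references the GitRepository defined above
  path: ./kustomize           # hypothetical directory within the repo
  prune: true                 # delete resources removed from Git
```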


Declarative Continuous Deployment for Kubernetes

Pros of Argo CD

  • More mature project with a larger community and ecosystem
  • Supports a wider range of Kubernetes resources and custom resource definitions
  • Offers a user-friendly web UI for visualizing and managing deployments

Cons of Argo CD

  • Can be more complex to set up and configure initially
  • May require more resources to run, especially for larger deployments
  • Less integrated with other Rancher products compared to Fleet

Code Comparison

Argo CD application manifest:

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: myapp
spec:
  destination:
    namespace: default
    server: https://kubernetes.default.svc
  project: default
  source:
    path: kustomize-guestbook
    repoURL: https://github.com/argoproj/argocd-example-apps.git
    targetRevision: HEAD

Fleet GitRepo manifest:

apiVersion: fleet.cattle.io/v1alpha1
kind: GitRepo
metadata:
  name: myapp
  namespace: fleet-default
spec:
  repo: https://github.com/rancher/fleet-examples
  paths:
  - simple

Both Argo CD and Fleet are GitOps tools for Kubernetes, but they have different approaches and features. Argo CD focuses on application-level deployments with advanced syncing and rollback capabilities, while Fleet emphasizes multi-cluster management and simpler configuration. The choice between them depends on specific use cases and integration requirements within the existing infrastructure.
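
As an illustration of Argo CD's syncing capabilities, the Application manifest above could be extended with a syncPolicy block; the fields below follow Argo CD's documented Application spec, with values chosen for illustration:

```yaml
# Argo CD syncPolicy: automated sync with pruning and drift correction (illustrative)
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: myapp
  namespace: argocd
spec:
  project: default
  destination:
    namespace: default
    server: https://kubernetes.default.svc
  source:
    repoURL: https://github.com/argoproj/argocd-example-apps.git
    path: kustomize-guestbook
    targetRevision: HEAD
  syncPolicy:
    automated:
      prune: true             # delete resources removed from Git
      selfHeal: true          # revert manual changes made in the cluster
```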


Open, Multi-Cloud, Multi-Cluster Kubernetes Orchestration

Pros of Karmada

  • Offers more advanced multi-cluster scheduling and resource management capabilities
  • Provides native Kubernetes API compatibility, reducing the learning curve
  • Supports multiple deployment models, including centralized and decentralized approaches

Cons of Karmada

  • More complex setup and configuration compared to Fleet
  • Requires additional components and resources to run effectively
  • May have a steeper learning curve for teams new to multi-cluster management

Code Comparison

Fleet configuration example:

apiVersion: fleet.cattle.io/v1alpha1
kind: GitRepo
metadata:
  name: my-app
  namespace: fleet-default
spec:
  repo: https://github.com/rancher/fleet-examples
  paths:
  - simple

Karmada configuration example:

apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
  name: example-policy
spec:
  resourceSelectors:
    - apiVersion: apps/v1
      kind: Deployment
  placement:
    clusterAffinity:
      clusterNames:
        - cluster1
        - cluster2

Both Fleet and Karmada aim to simplify multi-cluster management, but they take different approaches. Fleet focuses on GitOps-based deployments across clusters, while Karmada provides a more comprehensive multi-cluster orchestration solution with advanced scheduling and resource management features. The choice between the two depends on specific use cases and team expertise.

The Modern Application Platform.

Pros of KubeVela

  • More comprehensive application delivery platform with built-in OAM support
  • Extensible architecture allowing custom components and traits
  • Provides a higher-level abstraction for application management

Cons of KubeVela

  • Steeper learning curve due to additional concepts and abstractions
  • Less focus on multi-cluster management compared to Fleet
  • Potentially more complex setup for simple use cases

Code Comparison

KubeVela application definition:

apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: example-app
spec:
  components:
    - name: frontend
      type: webservice
      properties:
        image: nginx

Fleet GitRepo resource:

apiVersion: fleet.cattle.io/v1alpha1
kind: GitRepo
metadata:
  name: example-repo
spec:
  repo: https://github.com/example/repo
  paths:
    - manifests

Summary

KubeVela offers a more comprehensive application delivery platform with OAM support and extensibility, while Fleet focuses on multi-cluster GitOps. KubeVela provides higher-level abstractions but may have a steeper learning curve. Fleet is simpler for basic GitOps workflows but may lack some advanced application management features. The choice between the two depends on specific use cases and requirements.

OpenYurt - Extending your native Kubernetes to edge (project under CNCF)

Pros of OpenYurt

  • Designed specifically for edge computing scenarios, offering better support for edge-cloud synergy
  • Provides node autonomy, allowing edge nodes to operate independently when disconnected from the cloud
  • Includes YurtHub for efficient edge-cloud communication and traffic reduction

Cons of OpenYurt

  • More complex architecture, potentially requiring a steeper learning curve
  • Less focus on multi-cluster management compared to Fleet
  • May have limited applicability for non-edge use cases

Code Comparison

OpenYurt (edge node configuration):

apiVersion: apps.openyurt.io/v1alpha1
kind: NodePool
metadata:
  name: hangzhou
spec:
  type: Edge

Fleet (GitRepo resource):

apiVersion: fleet.cattle.io/v1alpha1
kind: GitRepo
metadata:
  name: my-app
spec:
  repo: https://github.com/rancher/fleet-examples

Both projects aim to enhance Kubernetes cluster management, but with different focuses. OpenYurt specializes in edge computing scenarios, providing features like node autonomy and efficient edge-cloud communication. Fleet, on the other hand, excels in multi-cluster management and GitOps-based deployments across various environments. The choice between the two depends on specific use cases and infrastructure requirements.


README

Introduction


Fleet is GitOps at scale. Fleet is designed to manage multiple clusters. It's also lightweight enough that it works great for a single cluster too, but it really shines when you get to a large scale. By large scale we mean either a lot of clusters, a lot of deployments, or a lot of teams in a single organization.

Fleet can manage deployments from Git of raw Kubernetes YAML, Helm charts, Kustomize, or any combination of the three. Regardless of the source, all resources are dynamically turned into Helm charts, and Helm is used as the engine to deploy everything in the cluster. This gives a high degree of control, consistency, and auditability. Fleet focuses not only on the ability to scale, but also on giving you a high degree of control and visibility into exactly what is installed on the cluster.
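
That per-cluster control is expressed in each directory's fleet.yaml, where targetCustomizations can override values for subsets of clusters selected by label. A sketch, with the group name, label, and values illustrative:

```yaml
# fleet.yaml: override chart values per group of clusters (illustrative)
helm:
  values:
    replicas: 1               # default for all clusters
targetCustomizations:
- name: production            # hypothetical group name
  clusterSelector:
    matchLabels:
      env: prod               # assumes clusters carry an env=prod label
  helm:
    values:
      replicas: 3             # applied only to matching clusters
```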

Quick Start

For more information, have a look at Fleet's documentation.

Install

Get Helm if you don't have it. Helm 3 is just a CLI and won't do bad, insecure things to your cluster.

For instance, using Homebrew:

brew install helm

Install the Fleet Helm charts (there are two because we separate out the CRDs for ultimate flexibility):

helm -n cattle-fleet-system install --create-namespace --wait \
    fleet-crd https://github.com/rancher/fleet/releases/download/v0.10.1/fleet-crd-0.10.1.tgz
helm -n cattle-fleet-system install --create-namespace --wait \
    fleet https://github.com/rancher/fleet/releases/download/v0.10.1/fleet-0.10.1.tgz

Add a Git Repo to watch

Change spec.repo to your Git repo of choice. Kubernetes manifest files to be deployed should be placed under /manifests in your repo.

cat > example.yaml << "EOF"
apiVersion: fleet.cattle.io/v1alpha1
kind: GitRepo
metadata:
  name: sample
  # This namespace is special and auto-wired to deploy to the local cluster
  namespace: fleet-local
spec:
  # Everything from this repo will be run in this cluster. You trust me right?
  repo: "https://github.com/rancher/fleet-examples"
  paths:
  - simple
EOF

kubectl apply -f example.yaml

Get Status

Get status of what Fleet is doing:

kubectl -n fleet-local get fleet

You should see something like this get created in your cluster.

kubectl get deploy frontend
NAME       READY   UP-TO-DATE   AVAILABLE   AGE
frontend   3/3     3            3           116m

Enjoy and read the docs.

License

Fleet is released under the Apache License 2.0.

For developer and maintainer documentation, see DEVELOPING.md.